AI in Graduate Education at Saybrook University: A Humanistic, Mind-Body Perspective
Graduate students now encounter artificial intelligence everywhere: at work, in research workflows, and increasingly in their writing environments. AI tools promise efficiency, but they also raise difficult questions about accuracy and the future of scholarly voice.
In the Mind-Body Medicine programs at Saybrook University, interim chair Luann Fortune, Ph.D., LMT, has spent the past several years studying how students actually use AI. Over time, she has seen where AI strengthens student learning and where its unexamined use starts to weaken the foundations of graduate education.
Saybrook’s approach centers on one core belief: AI should sharpen critical thinking without replacing it. Learn more about how Saybrook’s Mind-Body Medicine program is responding to AI with intentional design.
Why AI Raises Uniquely Human Questions in Graduate Education and Humanistic Psychology
At its core, graduate education is about learning how to think independently, make evidence-based claims, and contribute original insight to a field. As AI becomes more embedded in graduate coursework, those goals are being quietly tested.
Dr. Fortune is clear about what’s at stake. “Writing is the process through which understanding takes shape,” she insists. When students write, they discover what they believe, refine their reasoning, and learn to stand behind their claims. When that process is outsourced to AI, even in small ways, something essential is lost.
This concern becomes sharper at the doctoral level. “We’re talking about doctoral-level students who are expected to contribute original thinking,” Dr. Fortune says. “If their language, framing, or arguments are generated by a system that does not understand meaning, only patterns, students risk losing the opportunity to develop the analytic and conceptual skills their degrees demand.”
Does that mean graduate students should avoid it entirely? Not quite. “Let’s not throw the baby out with the bathwater,” says Dr. Fortune. “We know students are using it, so our approach has been to incorporate AI intentionally, teach students how to question it, and align every use with humanistic values.”
How Saybrook University Integrates AI Into the Mind-Body Medicine Curriculum
Rather than pretending AI isn’t part of students’ academic lives, Saybrook faculty chose to meet that reality head-on by teaching students how to use these tools with care and accountability.
Because Saybrook’s Mind-Body Medicine program is research-focused, not a clinical licensure track, AI is approached as a subject of inquiry that demands the same critical rigor students bring to research, theory, and practice. Rather than asking whether AI belongs in graduate education, Saybrook asks a more demanding question: How can students learn to challenge powerful tools without surrendering their voice?
“Our goals are geared toward creating researchers and original thinkers,” explains Dr. Fortune. Because mind-body medicine is an interdisciplinary field that requires contextual thinking and ethical awareness, examining the ethical and practical implications of AI becomes valuable training.
Across the curriculum, AI assignments vary in form: some inquiry-based, some conversational, some exploratory. Here are a few ways that approach comes to life.
1. Teaching Ethical AI Use Through Responsible Citation in Graduate Research
What This Develops: Academic Responsibility and Credibility
At Saybrook, ethical AI use begins with a foundational academic practice: citation, treated not as a technical requirement but as a form of responsibility.
Dr. Fortune frames scholarship as a lineage. “We’re standing on a community, a history, a legacy of scholars,” she explains. “Citation is how that lineage is honored.” When students fail to verify sources, or rely on AI-generated references without scrutiny, they break trust with the scholarly community they are entering.
Ethical citation involves:
- Cross-checking references to confirm they are real, recent, and accurately represented
- Verifying that sources actually support the claims being made
- Seeking out sources to verify claims made by AI tools
- Citing AI tools when they are applied or referenced in the research or writing process
- Taking responsibility for accuracy, interpretation, and attribution at every stage of the work
AI is known to hallucinate. In Mind-Body Medicine courses, faculty have seen how easily AI can produce references that look legitimate but are entirely fictitious. Author names are real. Article titles sound plausible. Journal formats appear correct. Without careful verification, even experienced readers can miss the deception.
Saybrook addresses this risk directly. When AI is used as part of an assignment, students are expected to cite it transparently and verify every claim it produces.
2. Teaching Graduate Students to Critique AI After Independent Intellectual Work
What This Develops: Verification and Scholarly Judgment
In several courses, students complete readings and develop their own understanding first. Only then are they asked to prompt AI with a research-based question.
In these assignments, the point isn’t the AI’s response itself but the student’s critical thinking about it. As Dr. Fortune puts it, “The core of the assignment was for the student to critique AI: How well did it answer this, and how close did it come to how you would answer it?”
3. Using AI to Surface Research Gaps While Preserving Scholarly Responsibility
What This Develops: Intellectual Responsibility
In some assignments, students use AI after completing their own work to identify limitations or questions they may have overlooked.
Students are then required to check any AI-supported claims against peer-reviewed sources, assess their accuracy, and decide what—if anything—belongs in their work.
“If we can use AI to see what we might have missed, to supplement the work that we do as humans, then it can be a tool,” Dr. Fortune explains. “But when we let it take over our arguments, our thinking, or our words, it becomes dangerous.”
4. Engaging AI in Structured Dialogue to Challenge Graduate Student Thinking
What This Develops: Critical Reflection and Scholarly Independence
In a spirituality-for-health course, Dr. Fortune and the instructors designed an assignment where AI entered the process only after students had completed substantial original work. Students first developed their own models or frameworks related to spirituality and health, drawing on course readings and research.
Only at the final stage did students turn to AI. They were asked to have a structured dialogue with the tool, prompting it to respond to their ideas, compare perspectives, or surface relevant themes from the literature. The task was not to adopt AI’s responses, but to interrogate them. Students examined where the AI aligned with established research, where it oversimplified complex ideas, and where it introduced claims that required verification.
This design reinforces a central principle of Saybrook’s humanistic approach: meaning-making remains a human responsibility. AI can help reveal blind spots, but it cannot decide what belongs in scholarly work.
5. Supporting Accessibility With AI Without Replacing Graduate Authorship
What This Develops: Ethical Use With Self-Awareness
Dr. Fortune also acknowledges that AI can serve as a support tool in certain contexts. In one case, a student used AI to translate academic language she struggled to understand due to dialect and language differences.
Rather than dismissing this use outright, Dr. Fortune used it as an opportunity to slow the conversation down and ask harder questions. “That’s fine and good,” she recalls saying, “but who’s going to check AI?”
AI could support comprehension, but it could not replace authorship or accountability. This moment reflects Saybrook’s commitment to educating the whole person—recognizing students’ lived experiences and access needs while still holding them responsible for judgment, verification, and ethical scholarship.
Dr. Fortune helped the student embrace a tool for accessibility while also using it responsibly: confirming meaning, citing AI use transparently, and taking responsibility for every claim and interpretation in her work. Ultimately, this is how graduate students learn what it means to take ownership of their scholarly voice.
When AI Gets It Wrong: Risks of Misinformation in Graduate and Doctoral Education
One of the most pressing reasons Saybrook teaches students to question AI-generated content is simple: AI gets things wrong, often convincingly.
Through multi-semester research on AI-integrated assignments, Dr. Fortune observed a troubling pattern. Students frequently noticed surface errors but missed deeper problems:
- Fabricated references
- Misrepresented findings
- Claims that sounded scholarly but had no grounding in the literature
In one data set, 46% of students correctly identified fictional citations; 44% missed them altogether.
As AI models improved, these errors became harder to detect. “It got better at making things up,” Dr. Fortune notes. “It got closer. It was harder to discern where the embedded lie was.”
The stakes extend beyond the classroom. Graduate students carry authority, especially at the doctoral level. “If our students inadvertently contribute to misinformation, then they are actually part of the disinformation campaign,” says Dr. Fortune. “When they have a Ph.D. after their name, people are going to believe them.”
In this way, AI literacy and critical thinking become safeguards for both academic integrity and the broader communities students will serve.
Preparing Mind-Body Medicine Leaders to Evaluate AI With Humanistic Discernment
Mind-body medicine sits at the intersection of research, practice, and emerging technology. AI will continue to shape this space, from clinical tools to research synthesis to wellness applications.
Saybrook does not shy away from that future, but the university insists on approaching it with care.
The Mind-Body Medicine program prepares students to evaluate technology through a humanistic lens. AI is framed not as inherently good or bad, but as powerful, and therefore deserving of scrutiny.
This balance is reflected in the curriculum. Students encounter AI in multiple forms: inquiry-based assignments, conversational explorations, and even visual applications.
Across formats, they are asked the same core questions:
- What does this tool do well?
- Where does it fail?
- And what responsibility do I carry when I use it?
These questions are especially relevant for students preparing to innovate in mind-body medicine. By developing the judgment to use technology thoughtfully, students carry forward skills that will shape their careers.
Why Saybrook University’s Humanistic Approach to AI Matters for Graduate Students
Key Takeaway: Saybrook University teaches graduate students to engage AI as a tool for inquiry, not as a substitute for scholarly judgment, authorship, or ethical responsibility.
Discover how Saybrook’s Mind-Body Medicine programs bridge tradition and innovation, preparing students to engage emerging technologies with discernment and integrity. Fill out the brief form below for more information.