Stanford Internal Medicine · GME
Use AI like a clinician — not like a copy-paste machine.
Six hands-on modules for IM residents and fellows. You'll start with ethics and responsible use, then compare models, iterate prompts, red-team for failure modes, learn skill-preserving habits, and ship a research figure — all with feedback from an AI tutor.
Your name and module activity are logged so course directors can see how the cohort is using the curriculum.
What you'll work through
- 01 · Ethics & Responsible Use · 15 min
PHI handling, accountability, disclosure, equity — the principles that thread through every other module.
- 02 · Models · 15 min
Generations, tiers, fast vs thinking, context windows — and which models you can actually use at Stanford.
- 03 · Prompting & Context · 20 min
What separates a vague prompt from an expert one — practiced in an iterative arena with live feedback.
- 04 · Failure Modes · 15 min
Hallucination, sycophancy, context collapse — red-team a model and watch these failure modes surface firsthand.
- 05 · Deskilling Risk · 20 min
Three skill-preserving prompt patterns that keep your clinical reasoning sharp while AI helps.
- 06 · AI for Research · 25 min
From research question through literature review, manuscript editing, and a generated central figure.
Why this exists. A 2025 Stanford IM needs assessment found 91% of residents had used at least one AI tool clinically — but mean self-rated AI competence was 2.6 / 5. Access without competence is a patient-safety risk. This curriculum closes that gap.