ANALYSIS

A Son Used Claude and NotebookLM to Catch 3 Cancer Misdiagnoses — His AI Workflow Saved Her Life

Elena Volkov · Apr 10, 2026 · 7 min read
Engine Score 8/10 — Important

This story presents a highly impactful and actionable AI workflow that could revolutionize patient advocacy and medical error detection. Its novelty and potential for widespread application make it a significant development in AI's practical use.


Pratik Desai, a 34-year-old technologist, built an AI cancer caregiver workflow in early 2026 that caught three separate CT scan misdiagnoses in his mother’s Stage 4 duodenal adenocarcinoma case. Using daily exports from Epic’s MyChart patient portal fed into Google’s NotebookLM and Anthropic’s Claude, Desai identified a missed lymph node progression, a staging error on a liver lesion, and a scan comparison that referenced the wrong baseline — each triggering a direct clinical intervention. His mother is alive. Her treatment plan changed three times because of what AI found in her own medical records.

The workflow took 25 minutes a day. The question it raises is why institutional medicine requires a technically skilled caregiver to build what should be a standard quality check.

Stage 4, Rare Cancer, and a System Not Built for This

Duodenal adenocarcinoma accounts for fewer than 0.3% of all gastrointestinal cancers, with an estimated 400–500 new diagnoses annually in the United States, according to the National Cancer Institute’s SEER database. Stage 4 means metastatic spread — in Desai’s mother’s case, to the peritoneum and liver. Most oncologists see fewer than five cases per career.

Rarity compounds every downstream problem. Radiologists read scans with pattern recognition calibrated for common cancers. Treatment protocols lag behind the evidence for ultra-rare presentations. Clinical notes accumulate across departments that don’t share software systems, even within the same hospital network.

Desai didn’t attempt to replace her care team. He built a system to make 14 months of her records comprehensible to himself, and used that comprehension to ask better questions.

The Workflow: Epic Exports, NotebookLM, and Claude as Second Opinion

The foundation is Epic MyChart’s “Download My Data” feature, which exports patient records as CCDA (Continuity of Care Document) structured PDFs and XML files. Desai downloaded the previous day’s records each morning — clinical notes, lab results, radiology reports, medication changes.

These fed into two parallel systems. NotebookLM served as the persistent indexed knowledge base: 14 months of records, prior scan reports, oncology literature PDFs, and NCCN treatment guidelines. Claude handled active analysis — comparing new reports against historical baselines, flagging language inconsistencies, and producing a prioritized question list before each physician visit.

Desai used Claude’s Projects feature to maintain persistent context across sessions, storing a running case summary, known medication sensitivities, and a log of every prior AI-flagged concern. The entire workflow cost approximately $22/month — compared to $150–$400/hour for a professional medical advocate service.
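Desai’s actual Projects setup isn’t public, but the persistent-context pattern is straightforward to sketch: keep a running case summary, known sensitivities, and a log of prior flags, and prepend all of it to each new analysis request. A minimal sketch follows; every field name and the example text are illustrative assumptions, not Desai’s real configuration.

```python
from dataclasses import dataclass, field


@dataclass
class CaseContext:
    """Running context a caregiver might maintain across AI sessions.

    Field names and contents are illustrative assumptions, not taken
    from Desai's actual setup.
    """
    summary: str                                       # one-paragraph case summary
    sensitivities: list[str] = field(default_factory=list)
    flagged: list[str] = field(default_factory=list)   # prior AI-flagged concerns

    def build_prompt(self, new_report: str) -> str:
        """Prepend the persistent context to a new report for analysis."""
        parts = [
            "CASE SUMMARY: " + self.summary,
            "KNOWN SENSITIVITIES: " + "; ".join(self.sensitivities),
            "PRIOR FLAGS: " + "; ".join(self.flagged),
            "NEW REPORT:\n" + new_report,
            "Compare this to prior reports in the context. Flag measurement "
            "changes, classification changes, and comparison reference "
            "discrepancies.",
        ]
        return "\n\n".join(parts)


# Hypothetical usage:
ctx = CaseContext(
    summary="Stage 4 duodenal adenocarcinoma, peritoneal and liver involvement.",
    sensitivities=["oxaliplatin neuropathy"],
    flagged=["paraaortic node growth trend"],
)
prompt = ctx.build_prompt("CT abdomen/pelvis: paraaortic node measures 13mm ...")
```

The point of the structure is that nothing depends on the model remembering anything between sessions: the caregiver-maintained context travels with every request.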

The Three Interventions — What Claude and NotebookLM Actually Caught

The AI identified discrepancies through language and measurement analysis, not image interpretation. Desai fed the text of radiology reports into Claude — not DICOM imaging files, which no current consumer AI tool interprets at clinical diagnostic grade. The catches were about what reports said relative to what prior reports had said.

Intervention 1 — The Missed Lymph Node Progression (October 2025)

A radiology report described a paraaortic lymph node as “stable.” Claude, comparing measurements across 14 months of records, flagged that the node had measured 8mm in July 2025, 11mm in September, and 13mm in October — a 62% size increase over three months. The “stable” classification appeared because the radiologist compared only to the immediately preceding scan, not the longitudinal timeline. Desai brought a printed measurement table to the next appointment. The oncologist ordered a biopsy. It confirmed active progression.
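The arithmetic behind this catch shows why pairwise comparison hides the trend: the final step (11mm to 13mm) is an 18% change that can read as measurement variability, while the longitudinal change from the July baseline is 62.5%. A few lines reproduce the numbers the article reports:

```python
def pct_change(a: float, b: float) -> float:
    """Percent change from measurement a to measurement b."""
    return (b - a) / a * 100


# Longitudinal measurements of the paraaortic node (month, mm),
# as reported in the article.
series = [("Jul 2025", 8.0), ("Sep 2025", 11.0), ("Oct 2025", 13.0)]

# Step-to-step changes: what a read against only the prior scan sees.
steps = [pct_change(series[i][1], series[i + 1][1]) for i in range(len(series) - 1)]

# Longitudinal change: what a comparison against the full timeline sees.
overall = pct_change(series[0][1], series[-1][1])

print([round(s, 1) for s in steps])  # [37.5, 18.2]
print(round(overall, 1))             # 62.5
```

The 62% figure in the article is this 62.5% longitudinal change, rounded.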

Intervention 2 — The Liver Lesion Staging Error (December 2025)

A December 2025 report classified a liver lesion as “likely cystic, benign.” NotebookLM surfaced a note from the original July 2024 staging workup describing the same lesion as “indeterminate; metastatic cannot be excluded.” Claude generated a side-by-side comparison of the two characterizations across the 17-month gap. An MRI followed. The lesion was confirmed metastatic, and the treatment protocol was revised entirely.

Intervention 3 — The Wrong-Scan Comparison (February 2026)

A chest CT report referenced a “prior study from six months ago.” Claude flagged that the accession number cited in the report matched a scan from 18 months prior, not six. The radiology team had pulled the wrong baseline. A growing pleural effusion had been underreported across multiple reads as a result. Desai’s notification to the clinical team produced a same-week thoracentesis.
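This class of error is mechanically checkable once a caregiver has an index of prior scans. A hypothetical sketch: map each accession number to its study date from the exported reports, then flag any report whose cited baseline is not roughly as old as the report claims. The accession numbers, dates, and tolerance here are invented for illustration.

```python
from datetime import date

# Hypothetical local index built from exported reports:
# accession number -> study date. Values are invented.
scan_index = {
    "ACC-2024-0812": date(2024, 8, 12),  # ~18 months before the Feb 2026 CT
    "ACC-2025-0815": date(2025, 8, 15),  # the actual ~6-month-prior study
}


def check_baseline(cited_accession: str, claimed_months: int,
                   report_date: date, tolerance_months: int = 2) -> bool:
    """Return True if the cited prior study is roughly as old as claimed."""
    prior = scan_index[cited_accession]
    actual_months = ((report_date.year - prior.year) * 12
                     + (report_date.month - prior.month))
    return abs(actual_months - claimed_months) <= tolerance_months


# The Feb 2026 report claims a six-month-old baseline but cites the 2024 scan:
ok = check_baseline("ACC-2024-0812", claimed_months=6,
                    report_date=date(2026, 2, 10))
print(ok)  # False -- the cited accession is ~18 months old, not ~6
```

A language model can do this check from report text alone, but as the sketch shows, it is also the kind of consistency rule a reporting system could enforce automatically.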

Why Radiology Reports Are Vulnerable to AI-Assisted Caregiver Detection

Radiologists in the United States read an average of 20,000 studies per year, according to a 2023 analysis in the Journal of the American College of Radiology — roughly one image every three to four seconds during active reading sessions. Studies consistently document error rates of 20–30% across radiology subspecialties, with most errors classified as omission errors: findings present in the imaging record but absent from the report.

The failure mode Desai systematically exploited is comparison anchoring: radiologists default to the most recent prior scan as the reference, not the most clinically relevant baseline. For a Stage 4 patient with 18 months of imaging history, the most recent scan is frequently the wrong reference point.

Human working memory degrades across a 40-patient reading session. An AI’s context doesn’t drift.

The Epic Export Process — Step by Step

Any caregiver can replicate this workflow with access to a hospital system running Epic, which serves approximately 250 million patients across U.S. health systems. The starting point is caregiver portal access, which requires patient authorization.

  1. Get caregiver proxy access: Request proxy access through the patient’s MyChart account. Epic systems at over 800 U.S. hospitals support this feature under “Share My Record” or “Proxy Access” settings.
  2. Export records daily: Navigate to Health → Download My Data. Export as CCDA PDF for readability with AI tools; XML for structured data queries.
  3. Sanitize filenames before uploading: Remove the patient’s name and date of birth from document filenames. The document content uploads intact; cleaning filename metadata reduces identifiability.
  4. Build the NotebookLM knowledge base: Create a dedicated notebook for the patient. Upload all historical reports, lab summaries, and relevant oncology literature. NotebookLM indexes the full corpus for natural-language queries across the entire document set.
  5. Run Claude analysis on each new report: Paste new reports into Claude with a standard prompt: “Compare this to prior reports in the context. Flag measurement changes, classification changes, and comparison reference discrepancies.”
  6. Generate appointment questions: Ask Claude to produce a prioritized question list before each physician visit, ranked by clinical urgency based on all flagged items.
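Step 3 is the most mechanical of the six and worth automating. A minimal sketch, assuming export filenames embed the patient’s name and a date-of-birth-like token — the filename pattern and the example name are assumptions, so check what your Epic instance actually emits:

```python
import re


def sanitize_filename(name: str, patient_name: str, case_id: str) -> str:
    """Replace the patient name and any date-like token in a filename
    with neutral placeholders.

    The filename pattern handled here is an assumption; verify against
    the files your Epic instance actually exports.
    """
    out = re.sub(re.escape(patient_name), case_id, name, flags=re.IGNORECASE)
    # Strip date-like tokens (YYYY-MM-DD or MM-DD-YYYY) that could be a DOB.
    out = re.sub(r"\d{4}-\d{2}-\d{2}|\d{2}-\d{2}-\d{4}", "REDACTED", out)
    return out


# Hypothetical export filename with an invented name and DOB:
clean = sanitize_filename("Doe_Jane_1958-03-14_CT_report.pdf",
                          patient_name="Doe_Jane", case_id="CASE-001")
print(clean)  # CASE-001_REDACTED_CT_report.pdf
```

This only cleans metadata the caregiver controls; names inside the document body still need the case-identifier substitution described in the privacy section below.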

Privacy, HIPAA, and What You’re Actually Agreeing To

HIPAA restricts covered entities — hospitals, insurers, providers — from disclosing records without patient authorization. It does not restrict patients or authorized caregivers from sharing their own records with third-party tools. A patient directing their own data to an AI system is legally permitted under current U.S. law.

The downstream commercial privacy risk is real regardless. Anthropic’s Claude API tier and Google’s NotebookLM with a Google Workspace account both carry stronger data handling commitments than free consumer versions — including protections against training data use. Desai used both paid tiers and substituted a case identifier for his mother’s name throughout all uploads. For caregivers handling sensitive oncology records, these are non-optional precautions.

What AI Cannot Do — The Limits Desai Enforced

Desai explicitly avoided asking AI to diagnose or recommend treatment. The workflow surfaced information; physicians acted on it. All three interventions were ordered by oncologists and radiologists — AI changed the information those clinicians had access to, not the judgment they applied to it.

Consumer AI tools cannot interpret DICOM files at clinical diagnostic grade. Desai’s entire workflow operated on text: report language, numeric measurements, accession number references, clinical note content. The gap between text-based discrepancy detection and image-based clinical diagnosis remains significant and unaddressed by this approach.

The broader question of where human judgment ends and AI capability begins in high-stakes decisions is being answered pragmatically, case by case. Desai’s answer is empirical: AI extends the information available to human experts, and that extension can be decisive.

A Replicable Template for Complex Case Management

The Desai workflow is not specific to oncology. Any caregiver managing a rare autoimmune disease, a complex post-surgical case, or a multi-drug regimen across multiple specialists can apply the same structure.

Step                 | Tool                          | Purpose                              | Daily Time
---------------------|-------------------------------|--------------------------------------|----------------
Record export        | Epic MyChart / patient portal | Data capture                         | 5 min
Historical indexing  | NotebookLM                    | Persistent knowledge base            | 5 min (ongoing)
Discrepancy analysis | Claude Projects               | Flag measurement and language errors | 10 min
Question generation  | Claude                        | Appointment preparation              | 5 min

MegaOne AI tracks 139+ AI tools across 17 categories. Claude and NotebookLM rank among a small set that have demonstrated genuine utility in unstructured, high-stakes professional contexts — not because they are infallible, but because they hold context longer and more consistently than any human can across 14 months of dense clinical documentation.

AI is already embedded in routine professional decisions across sectors — from real-time weather modeling to financial risk scoring. Healthcare, where accuracy failures cost lives, is arriving last.

Desai’s workflow doesn’t require exceptional technical skill. It requires access to a patient portal, two AI subscriptions totaling $22/month, and the discipline to run a 25-minute daily process. The three interventions it produced were not a product of AI outperforming oncologists. They were a product of AI holding 14 months of records simultaneously — without anchoring, drifting, or forgetting. For any caregiver managing a complex diagnosis, that’s not a capability that should require a technologist to build from scratch.
