On March 23, 2026, Pattern Computer published a novel explainable AI framework for mitosis detection in digital pathology in Nature: Scientific Reports. The framework achieves 96 percent fidelity between its predictions and its human-readable explanations: when the system identifies a cell undergoing mitosis, its explanation of why it flagged that cell accurately reflects the features the model actually used to make the decision.
Mitosis detection — identifying cells that are actively dividing — is a critical task in cancer pathology. The rate of mitosis in tumor tissue is one of the strongest prognostic indicators for cancer aggressiveness and directly influences treatment decisions. Pathologists currently perform this assessment manually by examining stained tissue slides under high magnification, a process that is time-consuming, subjective, and prone to inter-observer variability. AI systems can automate the detection, but clinical adoption has been limited by the inability to explain why a model flagged a particular cell.
Pattern Computer’s approach integrates a high-performance deep learning detection model with a prototype-based explanation system. For each detected mitosis event, the framework identifies the most similar examples from a curated reference set and presents them alongside the detection — allowing a pathologist to see not just that the AI flagged a cell, but which known mitotic cells it considered similar and what visual features drove the similarity assessment. This prototype-based explanation is more intuitive for clinicians than gradient maps or attention visualizations, which are difficult to interpret without machine learning expertise.
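The article does not detail the retrieval mechanics, but the core of a prototype-based explainer of this kind, ranking a curated reference set by embedding similarity, can be sketched as follows. All names here are illustrative, and the use of cosine similarity over deep-feature embeddings is an assumption, not the paper's published method:

```python
import numpy as np

def top_k_prototypes(query_emb, prototype_embs, k=3):
    """Rank reference prototypes by cosine similarity to a query embedding.

    query_emb      : 1-D feature vector for the detected cell (e.g., from
                     the detection model's penultimate layer -- assumed).
    prototype_embs : 2-D array, one row per curated reference mitosis.
    Returns the indices of the k most similar prototypes and their scores,
    which a viewer could display alongside the flagged cell.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = prototype_embs / np.linalg.norm(prototype_embs, axis=1, keepdims=True)
    sims = p @ q                        # cosine similarity to each prototype
    order = np.argsort(sims)[::-1][:k]  # highest similarity first
    return order, sims[order]

# Toy demo: 5 synthetic prototype embeddings; the query is a lightly
# perturbed copy of prototype 2, so it should rank first.
rng = np.random.default_rng(0)
prototypes = rng.standard_normal((5, 8))
query = prototypes[2] + 0.05 * rng.standard_normal(8)
idx, sims = top_k_prototypes(query, prototypes, k=3)
```

In a real deployment the embeddings would come from the trained detection network and the reference set from expert-verified mitotic figures; the ranking logic itself stays this simple.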
The 96 percent fidelity metric addresses a known weakness in explainable AI: many explanation methods generate post-hoc rationalizations that don’t accurately reflect the model’s actual decision process. A model might correctly identify a mitotic cell but generate an explanation highlighting irrelevant features, creating a false sense of understanding. Pattern Computer’s framework constrains the explanation mechanism to track the model’s actual reasoning, ensuring that the explanations are faithful rather than merely plausible.
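The article does not give the formula behind the fidelity figure. One common way to quantify explanation fidelity is the agreement rate between the model's prediction and the label the explanation itself implies (for a prototype-based system, e.g., the majority class of the retrieved prototypes). The sketch below uses that assumed definition; the function name and setup are illustrative:

```python
import numpy as np

def explanation_fidelity(model_preds, explanation_preds):
    """Fraction of detections where the label implied by the explanation
    matches the model's own prediction. 1.0 means the explanations are
    fully faithful to the model's decisions."""
    model_preds = np.asarray(model_preds)
    explanation_preds = np.asarray(explanation_preds)
    return float(np.mean(model_preds == explanation_preds))

# Toy demo: 25 detections; the explanation disagrees with the model once,
# giving an agreement rate of 24/25 = 0.96.
model = np.ones(25, dtype=int)
expl = model.copy()
expl[7] = 0
fidelity = explanation_fidelity(model, expl)
```

Under this definition, a post-hoc rationalization that highlights irrelevant features would drive the score down, because the explanation's implied label would stop tracking the model's output.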
The publication in Nature: Scientific Reports provides peer-reviewed validation of the approach, validation that is a prerequisite for clinical deployment in regulated healthcare environments. Medical AI systems used for diagnostic support require evidence that their explanations are reliable, not just that their predictions are accurate, because clinicians make treatment decisions based on both the AI's recommendation and its reasoning. A system that is accurate but unexplainable is less useful in clinical practice than one that is both accurate and transparently interpretable.
