Explainability

Machine learning programs typically can’t explain their reasoning; SimulConsult can.

This example shows how SimulConsult explains the possibility that a particular myopathy is the patient’s diagnosis.

  • Patient. On the left, the patient’s pertinent positive and negative findings are displayed, along with presence or absence and, where available, age of onset.
  • Disease. On the right, the frequency of each finding in the selected disease is displayed, with black representing the frequency now and purple the eventual frequency; a minimal data-model sketch follows this list.

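To make the display concrete, here is a minimal sketch of a data model for it, in Python. The class and field names, the finding "muscle weakness", and all frequency values are hypothetical illustrations, not SimulConsult’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientFinding:
    """One pertinent positive or negative on the left side of the display."""
    name: str
    present: bool                # True = pertinent positive, False = pertinent negative
    onset: Optional[str] = None  # age of onset, where available

@dataclass
class DiseaseFrequency:
    """Frequency of a finding in the selected disease, on the right side."""
    name: str
    frequency_now: float       # black: fraction of patients with the finding by now
    frequency_eventual: float  # purple: fraction who eventually develop the finding

# Hypothetical values loosely echoing the example in the text
patient = [
    PatientFinding("muscle weakness", present=True, onset="childhood"),
    PatientFinding("hearing impairment", present=True),
    PatientFinding("high creatine kinase", present=False),
]
disease = {
    "muscle weakness": DiseaseFrequency("muscle weakness", 0.90, 0.95),
    "high creatine kinase": DiseaseFrequency("high creatine kinase", 0.80, 0.90),
    # no entry for hearing impairment: it is not a finding in this disease
}
```
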
An excellent fit shows a lot of black for the pertinent positives and little black for the pertinent negatives. For this patient, the picture is less clear:

  • Pertinent positives. The pertinent positive findings fit well, except for hearing impairment, which isn’t a finding in this disease.
  • Pertinent negatives. The pertinent negative findings fit decently, except that most patients would have high creatine kinase by now (see the scoring sketch after this list).

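As a rough illustration of this fit assessment, the sketch below scores each finding against the disease’s frequency-by-now (the black bars): a pertinent positive fits when the finding is common by now, a pertinent negative when it is rare by now. The scoring rule and the numbers are assumptions for illustration, not SimulConsult’s actual algorithm.

```python
# Hypothetical frequency-by-now values for the selected disease (the black bars)
frequency_now = {
    "muscle weakness": 0.90,
    "high creatine kinase": 0.80,
    # "hearing impairment" is absent: it is not a finding in this disease
}

def finding_fit(name: str, present: bool) -> float:
    """Score in [0, 1]: positives should be common by now, negatives rare by now."""
    f = frequency_now.get(name, 0.0)  # findings not in the disease have frequency 0
    return f if present else 1.0 - f

print(finding_fit("muscle weakness", present=True))        # 0.9  -> good fit
print(finding_fit("hearing impairment", present=True))     # 0.0  -> poor fit: not a finding in this disease
print(finding_fit("high creatine kinase", present=False))  # ~0.2 -> weak fit: most patients would have it by now
```

In this toy scoring, hearing impairment and the absent creatine kinase elevation are exactly the two findings that weaken the fit, mirroring the clinician’s reasoning below.
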
A clinician assessing this diagnosis would entertain two possibilities:

  1. This is not the correct diagnosis.
  2. This is the correct diagnosis, but the hearing impairment has a different cause.

Because the reasoning is explained, the clinician remains in charge. A core principle of medical informatics is that clinicians value explainability as much as they value the correct ranking of possible diagnoses. By providing this explainability, SimulConsult offers a feature typically missing from machine learning approaches.