NTHRYS
Interpretability — SHAP, LIME & Counterfactuals Training | Biostatistics & ML for Omics

Explain biomedical ML models using SHAP, LIME and counterfactuals. Learn global and local explanations, feature attributions, stability checks and how to communicate trustworthy insights for omics and clinical models.


Interpretability — SHAP, LIME & Counterfactuals — Hands-on

Learn how to open the black box of biomedical and omics machine learning models. This module walks through global and local interpretability using SHAP and LIME, counterfactual explanations, stability checks, and practical reporting patterns so that domain teams can trust and act on model outputs.

Session 1
Fee: Rs 8800
Foundations of Model Interpretability
  • Why interpretability matters in biomedicine
      - regulatory and clinical expectations
      - debugging and model trust
      - safety and bias detection
  • Types of explanations and scopes
      - global vs local explanations
      - model-specific vs model-agnostic methods
      - post hoc vs intrinsic interpretability
  • Human factors in explanation design
      - audience awareness (clinician vs data scientist)
      - cognitive load and visual choices
      - limits of what explanations can claim
Session 2
Fee: Rs 11800
SHAP & LIME for Global & Local Explanations
  • SHAP concepts and workflows
      - Shapley value intuition
      - tree-based SHAP vs model-agnostic explainers
      - global importance and dependence plots
  • LIME for local explanation of predictions
      - local surrogate models
      - neighbourhood sampling choices
      - strengths and caveats of LIME
  • Visualizing and comparing explanations
      - summary plots for feature impact
      - per-patient explanation dashboards
      - alignment with domain knowledge
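The Shapley-value intuition covered in this session can be made concrete with a brute-force sketch: a feature's attribution for one prediction is the weighted average change in the model's output when that feature joins every possible coalition of the other features. The toy risk model, its weights, and the baseline patient below are all hypothetical; in practice the shap library computes these values far more efficiently.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x), by brute-force
    enumeration of feature coalitions. Features outside a coalition
    are set to their baseline value."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Hypothetical "risk model": weighted sum with one interaction term
def risk(z):
    age, biomarker, smoker = z
    return 0.02 * age + 0.5 * biomarker + 0.3 * smoker + 0.1 * biomarker * smoker

x = [65.0, 2.0, 1.0]         # patient of interest
baseline = [50.0, 0.0, 0.0]  # reference patient
phi = shapley_values(risk, x, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (risk(x) - risk(baseline))) < 1e-9
```

Note how the interaction term is split evenly between the biomarker and smoking attributions, which is exactly the behaviour that makes per-patient SHAP plots additive.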
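LIME's local-surrogate idea can likewise be sketched in a few lines: sample a neighbourhood around one instance, weight the samples by proximity, and fit a weighted linear model whose coefficients act as the local explanation. The black_box scorer, noise scale, and kernel width below are illustrative assumptions, not the lime package itself.

```python
import numpy as np

def lime_explain(predict, x, rng, n_samples=2000, width=0.75):
    """LIME-style local surrogate: perturb around x, weight samples by a
    Gaussian proximity kernel, and fit weighted least squares. The
    returned slopes are the per-feature local explanation."""
    d = x.shape[0]
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))   # neighbourhood sampling
    y = predict(Z)
    sw = np.sqrt(np.exp(-((Z - x) ** 2).sum(axis=1) / width ** 2))
    A = np.hstack([np.ones((n_samples, 1)), Z])          # intercept + features
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]                                      # drop the intercept

# Hypothetical black-box score (stands in for a trained clinical model)
def black_box(X):
    return 1 / (1 + np.exp(-(1.5 * X[:, 0] - 2.0 * X[:, 1])))

rng = np.random.default_rng(0)
x = np.array([0.2, -0.1])
slopes = lime_explain(black_box, x, rng)
# Locally, feature 0 pushes the score up and feature 1 pushes it down
assert slopes[0] > 0 and slopes[1] < 0
```

The main caveats discussed in the session show up directly in this sketch: the explanation depends on the sampling scale and kernel width, which is why stability checks matter.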
Session 3
Fee: Rs 14800
Counterfactuals, What Ifs & Sensitivity
  • Counterfactual explanation basics
      - what-if style explanations
      - plausibility and actionability constraints
      - multiple valid counterfactuals
  • Sensitivity of explanations and robustness checks
      - stability across random seeds and folds
      - perturbation tests on inputs
      - detecting spurious shortcuts
  • Linking interpretability to fairness analysis
      - group-wise explanations
      - identifying systematically high-risk subgroups
      - inputs for bias-mitigation workflows
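A minimal what-if search illustrates the counterfactual idea: decrease one actionable feature until the prediction crosses the decision threshold, subject to a simple plausibility constraint. The logistic risk model and its weights are fixed, hypothetical values for illustration; dedicated counterfactual methods search over multiple features and constraints at once.

```python
import numpy as np

def predict_risk(x):
    # Hypothetical logistic risk model with fixed illustrative weights
    w = np.array([0.04, 1.2, 0.8])   # age, biomarker, smoking
    b = -4.0
    return 1 / (1 + np.exp(-(x @ w + b)))

def counterfactual(x, feature, target=0.5, step=0.01, max_steps=2000):
    """Greedy what-if search: lower one actionable feature until the
    predicted risk drops below the target threshold. Returns the edited
    instance, or None if the plausibility constraint cannot be met."""
    cf = x.copy()
    for _ in range(max_steps):
        if predict_risk(cf) < target:
            return cf
        cf[feature] -= step
        if cf[feature] < 0:          # plausibility: level cannot go negative
            return None
    return None

x = np.array([60.0, 2.5, 1.0])       # age, biomarker level, smoker flag
assert predict_risk(x) >= 0.5        # flagged as high risk
cf = counterfactual(x, feature=1)    # biomarker is the actionable feature
```

Only the biomarker is edited, so the result reads as an actionable statement: "had the biomarker been this much lower, the model would not have flagged this patient."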
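The seed-stability check described in this session can be sketched with permutation importance: recompute attributions under several random seeds and verify that the feature ranking does not flip. The toy model and sample sizes here are assumptions chosen so the expected ranking is unambiguous.

```python
import numpy as np

def model(X):
    # Hypothetical score: strong dependence on feature 0, weak on feature 1
    return 3.0 * X[:, 0] + 0.2 * X[:, 1]

def importance(predict, X, col, rng):
    """Permutation importance: mean squared change in the prediction
    when one column is shuffled."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return float(np.mean((predict(X) - predict(Xp)) ** 2))

X = np.random.default_rng(0).normal(size=(500, 2))

# Stability check: recompute importances under several seeds and confirm
# that the top-ranked feature stays the same.
rankings = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    imps = [importance(model, X, c, rng) for c in range(2)]
    rankings.append(int(np.argmax(imps)))
assert rankings == [0] * 5   # feature 0 is consistently most important
```

An explanation whose ranking does flip across seeds is a warning sign, either of a near-tied importance or of a spurious shortcut the model has latched onto.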
Session 4
Fee: Rs 18800
Trust, Communication & Audit Trails
  • Communicating explanations to domain experts
      - storytelling with SHAP and LIME plots
      - summary paragraphs in plain language
      - flags and caveats in reports
  • Documentation and audit trails for explanations
      - storing explanation artefacts
      - versioning models and explanation configs
      - traceability for regulated use cases
  • Deliverables: interpretability report pack
      - global and local explanation figures
      - R / Python scripts using SHAP and LIME
      - template narrative for the methods section
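One way to sketch the audit-trail idea from this session: a small manifest that ties explanation artefacts to the model version and explainer configuration that produced them, with a content hash for traceability. All field names, versions, and paths below are illustrative, not a regulatory standard.

```python
import hashlib
import json
import time

def explanation_manifest(model_version, explainer_config, artefact_paths):
    """Build an audit-trail record linking explanation artefacts to the
    exact model version and explainer configuration that produced them.
    (Field names are illustrative examples.)"""
    config_blob = json.dumps(explainer_config, sort_keys=True)
    return {
        "model_version": model_version,
        "explainer_config": explainer_config,
        # Hash of the canonicalised config, so any change is detectable
        "config_sha256": hashlib.sha256(config_blob.encode()).hexdigest(),
        "artefacts": artefact_paths,
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

manifest = explanation_manifest(
    model_version="risk-model-1.3.0",
    explainer_config={"method": "shap", "explainer": "tree", "background_n": 100},
    artefact_paths=["figures/shap_summary.png", "tables/local_explanations.csv"],
)
```

Stored next to the figures themselves, a record like this lets a reviewer reproduce any explanation in the report pack from the named model version and configuration.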