ClinAssist
Research in Progress · 2026

AI that clinicians
can actually trust

ClinAssist is an explainable AI clinical decision support system designed for emergency care — combining real-time risk stratification with transparent, SHAP-powered reasoning that clinicians can interrogate, trust, and act on.

See How It Works · Research Background
3–5 min · Avg. triage window
>80% · AUC in AI triage models
55% · Non-English speakers in SW Sydney EDs
0 · Black boxes. Ever.

Powerful models exist. Clinical trust doesn't.

"The gap isn't the algorithm. It's the black box — and the clinician who can't afford to trust what they can't understand."
⚠️

Black-box predictions go unused

Clinicians routinely override or ignore AI recommendations they cannot interrogate — not out of stubbornness, but rational caution.

🧩

No integration with clinical workflow

Most AI models are benchmarked on retrospective datasets in isolation; few are designed around the three-to-five-minute triage interaction.

📊

Structured and unstructured data are siloed

EHR data — vitals, labs, history — and free-text clinical notes are rarely fused. ClinAssist brings them together.

ClinAssist — Patient Risk Dashboard · ED Bay 3 · Live
Risk Score
87/100
⬆ High acuity — immediate review
Predicted Triage
ATS 2
Emergency — seen within 10 min
Model Confidence
94%
High — 847 similar cases
Why this score? — SHAP Explanations
Feature contribution to risk prediction
O₂ Saturation (91%)
+0.42
Systolic BP (88 mmHg)
+0.31
Age (72)
+0.22
Arrival Mode (ambulance)
+0.18
No prior ED visits
−0.09
Heart Rate (last 5 min)
112 bpm · tachycardic

From patient data to explainable clinical insight

ClinAssist processes both structured EHR data and unstructured clinical notes in real time — then tells the clinician exactly why it reached its conclusion.

01
📋

Ingest Patient Data

Structured EHR data — vitals, labs, demographics, comorbidities — are pulled automatically at point of registration. Free-text clinical notes are captured as they are written and parsed by the NLP pipeline.

02
🧠

Dual-Model Prediction

A transformer-based NLP model processes free-text notes while a gradient-boosted model handles structured data. Both feed into a unified risk score.
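The dual-model step can be sketched as a simple late fusion. This is an illustrative sketch only: it assumes each sub-model emits a calibrated probability of high acuity, and the fusion weights are hypothetical, not ClinAssist's actual configuration. Combining in logit space keeps the unified score a valid probability.

```python
import math

def logit(p: float) -> float:
    """Map a probability to log-odds space."""
    return math.log(p / (1.0 - p))

def sigmoid(z: float) -> float:
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fuse(p_structured: float, p_notes: float,
         w_structured: float = 0.6, w_notes: float = 0.4) -> float:
    """Weighted average of the two sub-model outputs in logit space.

    Weights are hypothetical; in practice they would be tuned on
    validation data or replaced by a learned meta-model.
    """
    z = w_structured * logit(p_structured) + w_notes * logit(p_notes)
    return sigmoid(z)

# Gradient-boosted model on vitals/labs says 0.90; NLP model on notes says 0.75
risk = fuse(0.90, 0.75)
print(f"Unified risk score: {risk:.2f}")
```

When both sub-models agree, logit-space fusion returns their shared probability unchanged; when they disagree, the result is pulled toward the more heavily weighted model.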

03
🔍

SHAP Explanation Layer

Every prediction is explained in real time. The clinician sees exactly which features drove the score — and by how much. No black boxes, ever.
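The core idea of the explanation layer — additive feature attribution — can be illustrated without the `shap` library. For a linear model, the exact Shapley value of each feature reduces to weight × (value − baseline mean), and the contributions sum to the gap between the patient's score and the population baseline. The weights, means, and intercept below are invented for illustration; the real pipeline would use `shap.TreeExplainer` over a gradient-boosted model.

```python
# Hypothetical linear risk model: f(x) = INTERCEPT + sum(w_i * x_i).
FEATURES = ["o2_sat", "systolic_bp", "age", "ambulance_arrival"]
WEIGHTS = {"o2_sat": -0.05, "systolic_bp": -0.01,
           "age": 0.004, "ambulance_arrival": 0.18}   # invented weights
BASELINE_MEANS = {"o2_sat": 97.0, "systolic_bp": 120.0,
                  "age": 50.0, "ambulance_arrival": 0.2}  # invented means
INTERCEPT = 0.30

def predict(x: dict) -> float:
    """Raw (unbounded) risk score for a feature vector."""
    return INTERCEPT + sum(WEIGHTS[f] * x[f] for f in FEATURES)

def shap_values_linear(x: dict) -> dict:
    """Exact Shapley values for an additive (linear) model:
    contribution_i = w_i * (x_i - mean_i)."""
    return {f: WEIGHTS[f] * (x[f] - BASELINE_MEANS[f]) for f in FEATURES}

patient = {"o2_sat": 91.0, "systolic_bp": 88.0, "age": 72, "ambulance_arrival": 1}
contribs = shap_values_linear(patient)

# Additivity: baseline score + contributions == patient score
base = predict(BASELINE_MEANS)
assert abs(base + sum(contribs.values()) - predict(patient)) < 1e-9

# Signed, ranked contributions — the shape of the dashboard's SHAP panel
for f, v in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:>18}: {v:+.3f}")
```

Low oxygen saturation and low systolic blood pressure both push the score up (negative weight × below-baseline value), mirroring the positive SHAP bars in the dashboard mock above.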

Built for the realities of emergency care

Explainability

SHAP-Powered Transparency

Every prediction comes with a feature-level explanation. Clinicians can interrogate, challenge, and override — supported by clear reasoning, not blind outputs.

NLP

Clinical Notes Understanding

Transformer-based NLP reads triage and clinical notes in real time, extracting structured insight from unstructured language, including medical shorthand.

Risk Stratification

Real-Time Acuity Scoring

Risk scores update dynamically as new data arrives — vitals changes, new labs, updated notes — keeping the clinician's picture current throughout the encounter.

Integration

Workflow-First Design

Designed around the 3–5 minute triage window. ClinAssist surfaces the right information at the right moment — without adding cognitive burden.

Validation

MIMIC-IV Trained

Initial models are trained and validated on MIMIC-IV — one of the world's largest critical care EHR datasets — before progressing to partner institution pilots.

Trust

Human-Factors Validated

Clinician trust and decision accuracy are primary outcome measures — not just model AUC. We measure whether ClinAssist actually improves decisions, not just predictions.
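The "Real-Time Acuity Scoring" behaviour described above can be sketched as re-scoring a running patient state each time a new observation arrives. The scoring rule here is a toy stand-in for the real model — thresholds and weights are invented — but the update loop shows the intended pattern: merge the new observation, re-score, surface the change.

```python
def score(state: dict) -> float:
    """Toy acuity score in [0, 100]: worse vitals -> higher score.
    Thresholds and multipliers are illustrative, not clinically derived."""
    s = 0.0
    s += max(0.0, 97 - state.get("o2_sat", 97)) * 6        # hypoxia
    s += max(0.0, 100 - state.get("systolic_bp", 120)) * 2  # hypotension
    s += max(0.0, state.get("heart_rate", 80) - 100) * 1.5  # tachycardia
    return min(100.0, s)

# Vitals at registration: near-normal, low acuity
state = {"o2_sat": 96, "systolic_bp": 118, "heart_rate": 92}
print("arrival ->", score(state))

# New observations stream in; the score updates with each one
for update in [{"o2_sat": 91}, {"systolic_bp": 88}, {"heart_rate": 112}]:
    state.update(update)
    print(update, "->", round(score(state), 1))
```

In a live system the same loop would be driven by EHR events (new vitals, labs, notes) rather than a hard-coded list, with the dashboard re-rendering on each score change.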

Grounded in evidence. Built for the real world.

What the literature tells us

ML models consistently achieve AUC >0.80 for high-acuity ED outcomes including ICU transfer and hospital admission.
NLP on free-text triage notes significantly improves classification accuracy over structured data alone.
XGBoost and gradient boosting outperform simpler models across diverse ED populations in recent systematic reviews.
Clinician trust remains the primary barrier to adoption — not model performance.

Open Dataset

MIMIC-IV — Beth Israel Deaconess Medical Center. 300,000+ ICU and ED admissions. Gold standard for clinical AI research.

Built at the intersection of ML and medicine

SV

Sri Voruganti

Lead Researcher & Developer

CS (ML track) · Deep learning, NLP, SHAP explainability · LLM fine-tuning & prompt engineering

From proposal to pilot

Phase 0 · Complete

Research Proposal & Supervisor Engagement

ClinAssist proposal developed. Research direction defined. Initial supervisor outreach underway.

1
Phase 1 · Mid 2026

MRes Enrolment & Literature Review

Systematic review of clinical AI explainability literature. Research question refinement. Dataset access confirmed.

2
Phase 2 · Late 2026

Model Development on MIMIC-IV

XGBoost + SHAP pipeline on structured EHR data. Transformer NLP on clinical notes. Unified risk scoring interface prototype.

3
Phase 3 · Early 2027

Clinician Trust Study

Human-factors evaluation with ED clinicians. Does SHAP-based explainability improve decision accuracy and clinician trust vs. no-explanation baseline?

4
Phase 4 · Mid 2027

Thesis Submission & Publication

MRes thesis submission. Target publication in JAMIA, npj Digital Medicine, or similar clinical AI venues.

Interested in ClinAssist?

Whether you're a clinician, researcher, or health institution interested in collaborating — we'd love to hear from you.

Get In Touch · LinkedIn