Tuesday, April 7, 2026

Before symptoms speak: How AI is closing medicine’s diagnostic gap

By Professor Mohammad Yaqub, Associate Professor of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence


According to the World Health Organization, every three seconds, somewhere in the world, someone is diagnosed with dementia. In that same moment, a fetal ultrasound scan is being reviewed by a clinician who may not have the specialist training to catch what the image is quietly revealing. From the earliest weeks of life to its final chapters, the gaps in our ability to act on what medicine already knows are costing us – in missed diagnoses, in delayed interventions, in lives diminished or cut short. These are not edge cases. They are the daily texture of global healthcare, and they point to the same underlying failure: not a shortage of scientific knowledge, but the shortage of tools to act on it.

Eight million children are born each year with congenital diseases[i]. An estimated 240,000[ii] will not survive their first 28 days, in many cases because the conditions were never detected before birth. Fifty-five million people are living with dementia globally, a number set to reach 139 million by 2050[iii], yet 75 per cent of those affected carry no formal diagnosis at all. What these numbers share is not just scale. They share a common cause: the gap between what medicine knows and what medicine delivers.

This World Health Day, observed under the theme “Together for health. Stand with science”, the question worth asking is not whether AI belongs in medicine; that debate is largely settled. The question is whether we are moving fast enough to close a gap that costs lives every day.

The diagnostic gap in maternal and fetal medicine
Of the approximately eight million newborns affected by congenital diseases, a significant proportion go undetected before birth, not because medicine does not know what to look for, but because there are not enough specialists to look. There are too few skilled sonographers and fetal medicine specialists globally to conduct the detailed, high-quality assessments that early detection requires. In many low- and middle-income countries, that expertise is simply not available. Even where a hospital does have specialists, each must assess more fetuses per day than can reasonably be done well, leaving little time to examine any single case properly. This disparity matters, particularly given that nine out of ten children born with a serious congenital disorder are born in low- and middle-income countries[iv].

This is where large-scale AI models trained on clinical imaging data become relevant. Foundation models have transformed image analysis across many domains by learning generalisable representations from large, diverse datasets. Fetal ultrasound poses distinct challenges that make this harder than it sounds: constant fetal movement, variable image quality, and the fundamentally different anatomy of a developing body. The question is whether a model can be built that genuinely reflects these realities, rather than inheriting assumptions from adult imaging.
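Foundation models of this kind are typically trained with a contrastive, CLIP-style objective: matched image–caption pairs are pulled together in a shared embedding space while mismatched pairs are pushed apart. The NumPy sketch below illustrates that general objective only; it is not FetalCLIP’s training code, and the function name, temperature value, and batch setup are all illustrative.

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; training
    drives the diagonal entries above the off-diagonal (mismatched) ones.
    """
    # L2-normalise so the dot product becomes cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(logits))         # true pair for row i is column i

    def xent(l):
        # cross-entropy of each row against its diagonal target
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2
```

A correctly matched batch scores a lower loss than one whose pairings have been scrambled, which is exactly the signal the model learns from.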

At the BioMedIA Lab at MBZUAI, we have been working to address this directly. FetalCLIP, developed in collaboration with Corniche Hospital in Abu Dhabi, one of the UAE’s leading hospitals for women’s and newborn care, is a foundation model built specifically for fetal ultrasound analysis. It has been trained on more than 200,000 fetal ultrasound images across a range of gestational ages and anatomical planes. What makes this scientifically meaningful is its domain specificity: the model has learned representations that reflect the biological and imaging realities of prenatal medicine, rather than inheriting assumptions from non-clinical or adult-imaging contexts. In laboratory evaluations, its ability to detect congenital cardiac anomalies approaches the performance of an experienced clinical sonographer – a benchmark that matters because it suggests genuine clinical utility, rather than merely strong statistical performance on a test set.

It is important to be precise about what this means. A model that performs comparably to an expert clinician in controlled evaluations is not a clinician. It does not replace clinical judgement, account for every presentation variant, or prove readiness for large-scale deployment across diverse real-world settings. What it represents is a scientifically credible starting point, one that, properly validated and integrated into clinical workflows, could extend specialist-level screening to the communities that need it most.

The undiagnosed burden of neurodegenerative disease
If FetalCLIP addresses the diagnostic gap at the beginning of life, our work on neurodegenerative disease addresses the same gap at its other end. The conditions are different; the structural problem is identical and the global burden is significant.

Each year, 10 million people develop dementia, with Alzheimer’s disease, the most common form, accounting for 60-70% of cases[v]. Yet the most striking feature of this epidemic is not its scale; it is how much of it goes unseen. An estimated 75 per cent of people living with dementia have no formal diagnosis at all, meaning roughly 41 million cases across the globe are undiagnosed[vi].

Part of the diagnostic challenge is clinical heterogeneity. The umbrella of dementia encompasses Alzheimer’s disease, vascular dementia, Lewy body dementia, and frontotemporal dementia, among others, with conditions such as mild cognitive impairment representing prodromal stages along the disease continuum. These conditions often overlap symptomatically, especially in their early stages, and can be difficult to distinguish even with specialist assessment. Misdiagnosis between subtypes is not uncommon, with rates as high as 50%, and the consequences are significant. Different subtypes have distinct trajectories, risk factors, and management strategies. A patient misdiagnosed with Alzheimer’s when they have vascular dementia, for example, may not receive the cardiovascular risk management that could help slow their deterioration.

The deeper problem, however, is the timing of diagnosis. By the time a patient reaches clinical referral, significant neurodegeneration has typically already occurred. Research has shown that the pathological changes associated with Alzheimer’s – amyloid accumulation, tau pathology, hippocampal atrophy – are detectable years, sometimes a decade, before symptoms emerge. That window exists. The challenge is finding who is in it, reliably and at scale.

This is the scientific problem that NeuroPath, a platform developed within the BioMedIA Lab, seeks to address. NeuroPath provides clinicians with decision-support insights by analyzing both single-timepoint and longitudinal MRI scans to position patients along the dementia spectrum. It supports bilateral brain segmentation, automated feature extraction, and transparent, interpretable diagnostic outputs, enabling more accurate staging, earlier detection, and clearer differentiation between dementia subtypes in routine clinical workflows. While MRI serves as the primary modality currently deployed, additional models are planned to incorporate other data sources where available, including electronic health records (EHR), functional MRI, and genetic data.
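As a rough illustration of the kind of feature extraction such a pipeline performs, the sketch below turns a 3-D segmentation label map into per-region volume features and a longitudinal change measure. The region names, label IDs, and function names here are invented for illustration; this is not NeuroPath’s API or its actual segmentation method.

```python
import numpy as np

# Hypothetical label map convention: each voxel holds a region ID.
# A real pipeline would produce the map with a trained segmentation model.
REGIONS = {1: "left_hippocampus", 2: "right_hippocampus", 3: "ventricles"}

def regional_volumes(label_map, voxel_volume_mm3=1.0):
    """Turn a 3-D segmentation into per-region volume features (mm^3)."""
    return {name: float((label_map == lid).sum()) * voxel_volume_mm3
            for lid, name in REGIONS.items()}

def longitudinal_atrophy(baseline, follow_up):
    """Percent volume change between two timepoints, per region.

    Negative values indicate shrinkage (e.g. hippocampal atrophy),
    the kind of signal a longitudinal model can stage patients with.
    """
    return {r: 100.0 * (follow_up[r] - baseline[r]) / baseline[r]
            for r in baseline}
```

Features like these, computed from single-timepoint or serial scans, are what a downstream classifier can use to position a patient along the disease continuum.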

In evaluations, NeuroPath has demonstrated 98.75% accuracy in differential diagnosis across Alzheimer’s disease, vascular dementia, mild cognitive impairment, and healthy controls. Its capacity to predict the rate of progression from preclinical states to clinical Alzheimer’s disease shows a 47.9% improvement over MRI-only approaches. Crucially, the platform does not simply return a probability score. It identifies the specific brain regions, genetic markers, and clinical variables driving an individual patient’s risk profile – because a clinician who cannot understand a tool’s reasoning cannot responsibly act on it. Interpretability is not a feature; it is a prerequisite.
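For a simple linear risk score, that kind of per-variable explanation can be read off directly, as the minimal sketch below shows: each feature’s contribution is its weight times its deviation from a reference patient, and the contributions sum exactly to the score difference. The feature names and weights here are invented for illustration, and NeuroPath’s actual attribution method is not described in this article.

```python
import numpy as np

# Hypothetical features a dementia risk model might use; names are illustrative.
FEATURES = ["hippocampal_volume", "ventricular_volume", "apoe_e4_copies", "age"]

def linear_attributions(weights, patient, reference):
    """Decompose a linear risk score into per-feature contributions.

    contribution_i = w_i * (x_i - ref_i), so the contributions sum exactly
    to the difference between the patient's score and the reference score.
    """
    contrib = weights * (patient - reference)
    return dict(zip(FEATURES, contrib.tolist()))
```

An output in this form lets a clinician see, for example, that reduced hippocampal volume rather than age is what is driving an elevated score.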

What “standing with science” actually means
Neither of these programmes emerged from basic research alone. FetalCLIP was developed through active collaboration with Corniche Hospital. NeuroPath draws on longitudinal cohort data and clinical partnerships, grounding it in real patient populations. This model of co-design is not optional; it is what separates tools that work in the real world from tools that perform well in papers. Tools developed without clinical co-design often encode assumptions that reduce their utility in practice. Datasets curated without meaningful representation from the populations they are intended to serve risk underrepresenting the very patients who may benefit most.

Standing with science means acknowledging that the path from algorithm to patient outcome is long, and that shortening it demands honesty about failure modes as much as celebration of results. It means building for equity from the start, not as an afterthought. The diagnostic gap in congenital disease and dementia persists, in part, because previous tools were designed for and validated in populations that did not reflect global diversity.

The science is advancing. What the UAE’s AI research community, and the global community it works alongside, must ensure is that the pace of translation matches the pace of discovery. For the mother whose child’s heart defect goes undetected. For the patient whose Alzheimer’s is diagnosed only after years of silent progression. The diagnostic gap is not a research problem. It has never been just a research problem. It is a human one, and it demands a human urgency.


References

  • [i] WHO, 2024
  • [ii] WHO, 2023
  • [iii] WHO, 2021
  • [iv] WHO, 2023
  • [v] WHO, 2025
  • [vi] ADI, 2021