
Dr Zaid Al-Fagih, Co-Founder and CEO of Rhazes AI, examines why low-resource healthcare environments – particularly those rebuilding after conflict – may offer the most fertile ground for deploying AI-native health systems. Drawing on a pilot programme at a hospital in Southern Lebanon, he argues that the absence of legacy infrastructure, if paired with robust governance frameworks, could allow these settings to leapfrog the integration challenges that continue to impede AI adoption across developed-world healthcare systems.
The settings where AI could have the most immediate impact are not high-tech hospitals with multi-billion-dollar public budgets, but clinics rebuilding services after conflict. This includes environments with rudimentary technology, limited digital infrastructure for clinicians, and constrained access to basic resources such as electricity and water.
It sounds daunting, but this can be the ideal context for healthcare AI. Low-resource environments offer something advanced systems often cannot: the freedom to build an AI-native healthcare system from the ground up. An AI-native system is not one that retrofits tools into legacy infrastructure; it is one in which AI is designed in from inception and embedded in documentation, diagnostics and governance.
Currently, attention is largely focused on deploying AI tools into well-established health systems with large budgets and state-of-the-art hospitals. Yet despite the success stories, implementing AI solutions in developed countries can be extremely difficult. Providers face outdated regulations, complex hierarchies, and infrastructure that’s incompatible with their tools. A recent survey found that nearly half of hospitals in the U.S. are not ready to implement AI at scale (Guidehouse).
Integrating AI at scale
Western healthcare systems were important early adopters of AI, but integrating it at scale has proven far more complex than anticipated, largely because AI must coexist with entrenched legacy systems. Imagine if those same tools had been deployed without outdated regulations and legacy IT systems to contend with, and, better still, with AI embedded from the outset. This is why the best place to build AI-native health environments is also where they are needed most.
Last year, our team launched an AI pilot at Al Hamshari Hospital in Southern Lebanon (Middle East Health). Despite its modest size, the facility serves a huge district that includes one of the country’s biggest refugee camps. Each month, about 4,000 patients are treated at the hospital, which has just 80 beds, 56 doctors and 31 nurses. We deployed tools that help doctors write their notes, support clinical decision-making, and access evidence-based guidelines.
Resources at the hospital are extremely limited, with one computer per floor and no electronic patient record system. Many tasks are still carried out in a rudimentary way, with clinical notes and decision-support materials handled on paper. Crucially, however, this is the kind of environment where the biggest marginal gains can be made. Even small improvements in documentation quality and clinical support can go a long way towards meaningful gains in capacity.
Learning from low-resource AI implementations
Healthcare providers in developed nations also have much to learn from low-resource AI implementations. They show what it looks like to design health systems around AI, with strong governance built in from day one. Instead of an optional add-on, AI becomes part of the foundation, extending clinical capacity, digitising patient records, and providing diagnostic support.
Most importantly, it is citizens who stand to gain. AI investment can stimulate economic growth, modernise healthcare processes, and allow clinicians to spend more time treating patients. Low-resource systems, particularly those rebuilding after conflict, also have the advantage of writing procurement rules, data governance standards, and interoperability requirements with AI in mind from the outset, rather than retrofitting them years later.
Syria is emerging from more than a decade of civil war, and in many areas, births and deaths are under-recorded because documentation remains fragmented and paper-based. Health systems that rely on paper records risk medical errors, information gaps, and the loss of confidentiality through misplaced or unauthorised access to files.
Agentic AI
Agentic AI, meaning systems that can take limited, supervised actions on a clinician’s behalf, could enable doctors to transcribe consultations digitally rather than taking notes by hand. This is just one route to transforming general practice in Syria. AI could also support care in areas such as surgery, pharmacology and paediatrics. But to avoid the compliance challenges faced by Western systems, trust, safety and sovereignty must be built in from day one.
An AI-native system will only work if patients and health workers trust it. The risks are real: biased models, unclear liability when decision-support is wrong, and vendor lock-in that quietly shifts control of national health data offshore. This is where low-resource settings can have an advantage over legacy systems, because they can bake in governance from the start.
Collaboration required to develop a responsible framework
That advantage depends on strong guardrails. Without them, the effort risks becoming an exercise in exporting solutions, rather than building systems that genuinely serve the communities that rely on them. This is why the public and private sectors should collaborate on a framework for the responsible, scalable deployment of clinical AI in low-resource settings.
National health authorities, local clinicians, patient representatives and technology providers should all have seats at the table. The World Health Organization could provide convening power for such a body, given its role in setting global health standards.
The first pillar of the framework should address data governance and sovereignty. Patients’ data should remain under national jurisdiction, with clear rules for consent, access and secondary use. Procurement contracts should prohibit any data extraction that is not explicitly authorised.
The second pillar concerns clinical safety and accountability. Every model should have a defined intended use, clear performance requirements and ongoing monitoring. Clinical responsibility must remain with licensed professionals, and systems must fail safely when confidence is low.
The third pillar targets interoperability and competition. Countries should require open standards and exportable data so they can switch vendors without losing records or becoming dependent on a single supplier.
Crucially, the framework must be co-designed with low-resource countries, not written for them. Otherwise, it risks becoming an export project driven by vendor incentives rather than local priorities.
Nearly everything we know about AI in healthcare comes from developed nations. High-resource environments were the proving ground, but the next growth phase should happen in health systems where standards lag the global average. The next generation of health systems will not just be AI-enabled; they will be AI-native, and many will emerge where the need is greatest.
About the author
Dr Zaid Al-Fagih is the Co-Founder and CEO of Rhazes AI, an award-winning AI-powered virtual assistant transforming the way doctors work. Prior to founding Rhazes AI, Dr Al-Fagih practised full-time as a medical doctor in the NHS and was a voluntary first responder and first aid trainer on multiple humanitarian missions during the Syrian conflict. Dr Al-Fagih holds a Bachelor of Medicine, Bachelor of Surgery (MBBS) and an intercalated BSc in Management from Imperial College London, as well as a Master of Public Policy from the University of Oxford, and has since added certifications in machine learning, deep learning, and product management.




