Artificial intelligence for global health
Clinical Data
Nov 24, 2019
Artificial intelligence (AI) has driven marked progress in the detection, diagnosis, and treatment of diseases. Deep learning, a subset of machine learning based on artificial neural networks, has enabled applications that approach the performance of trained professionals in tasks such as interpreting medical images and discovering drug compounds (1). Not surprisingly, most AI developments in health care cater to the needs of high-income countries (HICs), where the majority of research is conducted. Conversely, little is discussed about what AI can bring to medical practice in low- and middle-income countries (LMICs), where workforce shortages and limited resources constrain access to and quality of care. AI could play an important role in addressing global health care inequities at the individual patient, health system, and population levels. However, challenges in developing and implementing AI applications must be addressed before widespread adoption and measurable impact can be achieved.
Health conditions in LMICs and HICs are rapidly converging, as indicated by the recent shift of the global disease burden from infectious diseases to chronic noncommunicable diseases (NCDs, including cancer, cardiovascular disease, and diabetes) (2). Both contexts also face similar challenges, such as physician burnout due to work-related stress (3), inefficiencies in clinical workflows, inaccuracies in diagnostic tests, and increases in hospital-acquired infections. Despite these similarities, more basic needs remain unmet in LMICs, including shortages of health care workers, particularly specialists such as surgical oncologists and cardiac care nurses. Patients often face limited access to drugs, diagnostic imaging hardware (ultrasound, x-ray), and surgical infrastructure (operating theaters, devices, anesthesia). When equipment is available, LMICs often lack the technical expertise needed to operate, maintain, and repair it; as a result, 40% of medical equipment in LMICs is out of service (4). Conditions are exacerbated in fields that require both a specialized workforce and specialized equipment. For example, delivering radiotherapy requires a team of radiation oncologists, medical physicists, dosimetrists, and radiation therapists, together with sophisticated particle accelerator equipment. Consequently, 50 to 90% of cancer patients requiring radiotherapy in LMICs lack access to this relatively affordable and effective treatment modality (5).
LMICs have invested substantially in health care, saving millions of lives by improving access to clean water, vaccinations, and HIV treatments. However, health care needs are changing: rising mortality from complex NCDs demands high-quality, longitudinal, and integrated care (6). These emerging challenges have been central to the United Nations’ Sustainable Development Goals, including the aim to reduce premature mortality from NCDs by one-third by 2030. AI has the potential to fuel and sustain efforts toward these ambitious goals.
Health care–related AI interventions in LMICs can be broadly divided into three application areas (see the figure). The first includes AI-powered low-cost tools running on smartphones or portable instruments. These mainly address common diseases and are operated by nonspecialist community health workers (CHWs) in off-site locations, including local centers and households. CHWs may use AI recommendations to triage patients and identify those requiring close follow-up. Applications include diagnosing skin cancer from photographic images and analyzing peripheral blood samples to diagnose malaria (7); more are expected given the emergence of pocket diagnostic hardware, including ultrasound probes and microscopes. With increasing smartphone penetration, patient-facing AI applications may guide lifestyle and nutrition, allow symptom self-assessment, and provide advice during pregnancy or recovery periods—ultimately allowing patients to take control of their health and reducing the burden on limited health systems.
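To make the first application area concrete, the sketch below illustrates how a CHW-facing smartphone tool might run a skin-lesion triage model entirely on the device. It is a minimal illustration only: the model file, single-probability output, float input format, and referral threshold are assumptions made for the example, not details drawn from the cited studies.

```python
# Minimal on-device triage sketch (illustrative; model file and threshold are hypothetical).
import numpy as np
from PIL import Image
import tensorflow as tf

REFERRAL_THRESHOLD = 0.5  # assumed cutoff above which the CHW refers the patient


def triage_lesion_photo(image_path, model_path="lesion_classifier.tflite"):
    # Load a lightweight exported model with the TensorFlow Lite interpreter
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    # Resize the photograph to the model's expected input size and scale pixel values
    height, width = input_details["shape"][1:3]
    image = Image.open(image_path).convert("RGB").resize((width, height))
    pixels = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

    # Run inference; assume the model emits a single probability of a suspicious lesion
    interpreter.set_tensor(input_details["index"], pixels)
    interpreter.invoke()
    p_suspicious = float(interpreter.get_tensor(output_details["index"])[0][0])

    return "refer to clinic" if p_suspicious >= REFERRAL_THRESHOLD else "routine follow-up"
```

Keeping inference on the device rather than on a remote server matters in settings where connectivity is intermittent, a constraint discussed further below.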
The second application area focuses on more specialized medical needs, with the goal of supporting clinical decision-making. AI may allow nonspecialist primary care physicians to perform specialized tasks, such as reading diagnostic radiology and pathology images, referring patients to specialists only when necessary. AI tools may also help provide specialists with expert knowledge across multiple subspecialties. This is particularly important in oncology, where a lack of subspecialists may force an oncologist to manage tumors across multiple anatomical sites and thus deliver lower-quality care owing to the constantly shifting scope of services. In radiotherapy, for example, semi-automation of the treatment planning process may speed up treatment delivery, increase patient intake, and allow greater focus on the clinical nuances of patient management, all without requiring additional personnel. Although AI may not directly address shortages of diagnostic and therapeutic equipment, integrating AI into equipment design may help nontechnical operators troubleshoot issues when technicians are scarce. By analyzing historical maintenance data, AI may also help sustain long-term operations, predict failures, and avoid delays in procuring parts and consumables.
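As a sketch of the maintenance use case, the example below ranks equipment by predicted near-term failure risk so that scarce technician time can be directed where breakdowns are most likely. The log file, feature names, and 30-day failure label are hypothetical placeholders, and a simple logistic regression stands in for whatever model a real deployment would use.

```python
# Illustrative failure-risk ranking from historical maintenance logs (columns are hypothetical).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

logs = pd.read_csv("maintenance_logs.csv")  # one row per machine per month (assumed format)
features = ["machine_age_years", "hours_since_last_service", "error_events_last_month"]
X, y = logs[features], logs["failed_within_30_days"]  # 1 if the unit failed soon afterward

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score held-out machines and list the highest-risk units first for preventive servicing
ranked = X_test.assign(failure_risk=model.predict_proba(X_test)[:, 1])
print("Held-out AUC:", roc_auc_score(y_test, ranked["failure_risk"]))
print(ranked.sort_values("failure_risk", ascending=False).head())
```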
The third application area relates to population health, allowing public agencies to discern cause-and-effect relationships, allocate often-limited resources appropriately, and ultimately mitigate the progression of epidemics (8). Improving data collection in LMICs is central to these applications. For example, AI may help maintain up-to-date national cancer registries. Automated registry curation, which extracts standard data elements from the free-form text of radiology and pathology reports, may help reduce labor costs, which account for more than 50% of all registry activity expenses (9). Other applications include identifying hotspots for potential disease outbreaks in unmapped rural areas through AI-powered analysis of aerial photography and weather patterns, as well as planning and optimizing CHWs’ household visiting schedules. Although these applications may prompt immediate, actionable interventions, their translation into effective long-term health policies remains unclear.
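A deliberately simplified sketch of such registry curation appears below. It applies hand-written rules to an invented report snippet purely for illustration; practical curation pipelines would typically combine rules like these with trained language models and manual review.

```python
# Illustrative rule-based extraction of registry fields from free-text pathology reports.
import re

report = (
    "Specimen: left breast, core biopsy. "
    "Diagnosis: invasive ductal carcinoma, grade 2. "
    "ER positive, PR positive, HER2 negative."
)

patterns = {
    "site": r"Specimen:\s*([^,\.]+)",
    "histology": r"Diagnosis:\s*([^,\.]+)",
    "grade": r"grade\s*(\d)",
    "er_status": r"ER\s+(positive|negative)",
    "her2_status": r"HER2\s+(positive|negative)",
}

# Apply each pattern (case-insensitive) and keep the first match, or None if absent
record = {
    field: (m.group(1).strip() if (m := re.search(rx, report, re.IGNORECASE)) else None)
    for field, rx in patterns.items()
}
print(record)
# {'site': 'left breast', 'histology': 'invasive ductal carcinoma',
#  'grade': '2', 'er_status': 'positive', 'her2_status': 'negative'}
```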
HIC-based AI applications in health care are far from perfect. Most are at the proof-of-concept stage and require further demonstration of utility through clinical validation in prospective trials. The underlying methods are often uninterpretable, making it difficult to predict failures and critically assess results. Data used to train AI models are collected almost entirely within HICs, so models are skewed toward certain diseases, demographics, and geographies. Because statistical analysis and quality control vary across studies, errors and systematic biases are introduced into models, limiting their generalizability, especially when models are deployed in different contexts. Ethical concerns about the use of AI in health care include undermining patient data privacy protections, exacerbating the existing tension between providing care and generating profit, and introducing a third party into the patient-doctor relationship, which changes expectations of confidentiality and responsibility (10). From a regulatory perspective, frameworks for medical malpractice and liability in health-related algorithmic decision-making have yet to be formulated. Nearly all AI tools in health care are single-task applications and are therefore incapable of fully substituting for health professionals. Understanding these limitations may help avoid hype and inflated expectations.
Introducing AI tools in resource-constrained settings presents additional challenges. The distinct needs, diseases, demographics, and standards of care in LMICs must be acknowledged by identifying the specific use cases in which AI involvement would have the greatest impact. Data for AI training and validation must be context specific: Computer vision systems may be required to work with legacy data formats (e.g., film rather than digital x-ray), whereas developing chatbots will require compiling corpora in local languages. Solutions must also be context specific; for example, an automated system should not recommend treatments that are unavailable locally or prohibitively expensive. Moreover, human factors should be considered: What levels of skill, education, and computer literacy are required of end users? The behavioral change needed to build awareness of, and confidence in, AI systems should also be addressed, so that users can recognize limitations and interpret results accurately. Infrastructure constraints should be assessed, including the availability of devices for serving AI applications; the reliability of internet connectivity, bandwidth, and electricity; and the amount and quality of existing digital data, as well as future digitization efforts.
Multiple digital initiatives have been proposed to enhance access to and quality of health care in LMICs. These include technologies that support health care practices through electronic processes (eHealth) and remote telecommunications (telehealth), an example of which is mobile health (mHealth) delivered through mobile phones and tablets. Best practices for scaling these initiatives in LMICs have been established on the basis of real-world experience, including the World Health Organization’s mHealth Assessment and Planning for Scale (MAPS) Toolkit (11). These efforts could provide learning opportunities for similar digital AI applications. Many of the challenges encountered in integrating electronic medical records in LMICs, for example, are likely to also impede AI applications, including limited funding, poor infrastructure for reliably delivering technologies, and discontinuous participation from users (12). Integration opportunities could also be considered: An existing mHealth application for remote patient-physician communication could be enhanced with an AI chatbot that triages patients before the consultation.
There is skepticism about the value of introducing AI in LMICs, given the need to prioritize investments in basic infrastructure (13). AI-driven interventions should not be regarded as a universal panacea, nor should they be evaluated in isolation: Although sizable initial investments may be required, the marginal cost of providing an existing AI software service to an additional user is minuscule, making it economically scalable. An AI application may also use the deployment channels of existing digital technologies, making it almost readily deployable.
Ultimately, AI interventions in LMICs should be initiated, owned, and administered by local stakeholders, with HICs providing funding, expertise, and advice when needed. AI literacy may be included in existing global health educational programs to raise awareness of its capabilities and pitfalls. Empowering local technical AI talent will also be crucial and may be accelerated through high-quality, free online educational resources. AI implementation will require rethinking existing regulatory frameworks; for example, the training and scope of practice of CHWs may be expanded to include screening for and diagnosing NCDs (14). Investment areas critical to bringing AI into LMICs must also be identified, and evidence on the impact of AI solutions must be gathered (15). Uneven access to technologies has created a digital divide between rich and poor and has compounded existing global inequalities. AI could instead emerge as a socially responsible technology in which equity is an inherent property.
This is an article distributed under the terms of the Science Journals Default License.