December 01, 2020 – Data access and industry standards may help leaders mitigate potential bias in healthcare artificial intelligence tools, as well as improve implementation of the technology, according to a report from the US Government Accountability Office (GAO).
GAO noted that artificial intelligence has many possible uses in healthcare, in both the clinical and administrative areas of the industry.
“Developers have demonstrated AI tools in a number of clinical applications, such as supporting clinical decision-making. These tools are at varying stages of maturity and adoption, but with the exception of population health management tools, many have not achieved widespread use,” the report stated.
“Use of AI tools for administrative applications could also affect patient care, including by reducing provider burden, and are also at varying stages of maturity and adoption, ranging from emerging to widespread.”
While the potential of AI has been well-documented and demonstrated by researchers and developers alike, the technology could also bring significant challenges to care delivery. Concerns over data access, bias, transparency, and integration have hindered the use of AI in healthcare, and will continue to do so until these issues are addressed.
GAO assessed available and developing healthcare AI tools and developed policy options that could help address the challenges or enhance the benefits of these technologies. Data accessibility was one of the top challenges the agency identified in its analysis.
“Accessing sufficient high-quality data to develop AI tools is a significant challenge—so much so that it can be considered one of the most important factors when deciding what tools to develop,” GAO stated.
“Data are integral to all phases of AI tool development and deployment. Large quantities of high-quality data are needed to train, tune, evaluate, and validate AI models.”
GAO suggested that leaders use innovative technologies to increase data access and sharing among healthcare organizations.
“Policymakers could consider increasing data access by creating a type of mechanism known as a data commons: a cloud-based platform where users can store, share, access, and interact with data and other digital objects,” the report said.
“For example, the Stanford Institute for Human-Centered Artificial Intelligence proposed a National Research Cloud, which would be a partnership between academia, government, and industry to provide access to resources, potentially including a large-scale, government-held data set in a secure cloud environment to develop and train AI.”
Broader availability of health information could also help address concerns about bias and equity in AI algorithm development.
“Increasing access to high-quality data could help developers address bias concerns by ensuring data are representative, transparent, and equitable. A common platform would allow people to test and validate their algorithms across multiple health systems or data sets. The replication of outputs in multiple situations could prevent the introduction of bias into the algorithm as it is being tested and validated,” GAO said.
“Enhanced data sharing can also mitigate bias by ensuring open access to the data so developers and providers can assess how the AI was trained and tested.”
In addition to data access, issues with scaling and integration in healthcare organizations can also hinder AI use. Differences among institutions and the patient populations they serve can make it challenging to widely apply and implement these tools.
“Population differences can make it difficult to scale and integrate AI tools for the same reasons that they can introduce bias: tools developed with non-representative data may not be generalizable,” GAO wrote.
“Similarly, institutional differences can make scaling and integration difficult because AI tools developed in one setting, such as at a high-resource hospital, may make recommendations that are inappropriate in another, such as a low-resource hospital.”
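The GAO report does not include code, but the generalizability problem it describes can be sketched with a small synthetic example: a simple rule tuned only on one site's patient population looks accurate there, then degrades when applied to a site with a different patient mix. Everything below (site names, age distributions, risk rates, the age-threshold "model") is a hypothetical illustration, not anything from the report.

```python
import random

random.seed(0)

def make_site(n, mean_age, extra_risk_rate):
    """Generate synthetic patients as (age, is_high_risk) pairs.

    Each site's true risk pattern differs: risk tracks age relative to
    that site's own population, plus a site-specific base rate.
    """
    patients = []
    for _ in range(n):
        age = random.gauss(mean_age, 10)
        is_high_risk = age > mean_age + 5 or random.random() < extra_risk_rate
        patients.append((age, is_high_risk))
    return patients

def accuracy(threshold, patients):
    """Fraction of patients the age-threshold rule classifies correctly."""
    correct = sum((age > threshold) == risk for age, risk in patients)
    return correct / len(patients)

# A high-resource development site and a site with a different population.
site_a = make_site(1000, mean_age=60, extra_risk_rate=0.05)
site_b = make_site(1000, mean_age=45, extra_risk_rate=0.30)

# "Train": pick the threshold that performs best on site A alone.
threshold = max(range(30, 90), key=lambda t: accuracy(t, site_a))

print(f"Site A accuracy: {accuracy(threshold, site_a):.2f}")
print(f"Site B accuracy: {accuracy(threshold, site_b):.2f}")  # markedly lower
```

Evaluating the same frozen rule on the second site exposes the drop that single-site validation hides, which is the case the report makes for testing algorithms across multiple health systems or data sets.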
Defined standards of data sharing and collection could help address this issue, GAO said.
“Best practices could improve scalability of AI tools by enhancing interoperability. As discussed above, scaling AI tools can be difficult because of challenges related to retraining models using differently formatted data, among other things. Standards such as those identified by the Interoperability Standards Advisory (ISA) may be able to help address this challenge,” the report stated.
“Standing working groups or committees could identify the areas in which best practices would be most beneficial, develop, and periodically update best practices to help ensure they remain current and relevant. Meetings could occur with representatives from academia, patient and physician advocacy groups, industry, and the federal government, among other entities.”
Interdisciplinary collaborations could also help facilitate easier implementation and application of AI models in healthcare.
“Early and consistent collaboration could help developers design AI tools that are easier to implement and use within providers’ existing workflow and associated constraints,” GAO wrote.
“According to one provider organization, providers are only seen as the end-user of the product. However, they can also contribute to product design because they have useful information on how the products may affect their workflow and the patient experience, as well as insight on how to best design the tools to be easily implementable.”
With these policy options, GAO said, healthcare leaders could address barriers to AI use and enhance the value of these tools.
“The US healthcare system is under pressure from an aging population; rising disease prevalence, including from the current pandemic; and increasing costs. New technologies, such as AI, could augment patient care in healthcare facilities, including outpatient and inpatient care, emergency services, and preventative care,” GAO concluded.