In this special guest feature, Corinne Stroum, Director of Product for Utilization and Cost at KenSci, discusses how healthcare’s migration to electronic medical records in the early 2010s heralded an age of promise: increased efficiency, an opportunity to understand disease and populations at scale, and a reduction in errors. Corinne is an experienced product leader in healthcare analytics, health IT, and population health. She is a subject matter expert on the data footprint of healthcare – whether clinical or claims-based – with additional experience in patient experience surveys, socio-economic factors and social determinants, and unstructured notes. Her fluency in clinical interoperability standards, healthcare ontologies, and health policy enables her to manage interdisciplinary teams and engage with customers and thought leaders.
Healthcare’s migration to electronic medical records in the early 2010s heralded an age of promise: increased efficiency, an opportunity to understand disease and populations at scale, and a reduction in errors. Like other industries, healthcare has also faced the challenge of adopting technologies before policy and ethics have had a chance to understand their implications.
Data: It’s what fuels healthcare into the new era
Data sources in healthcare are becoming more complex and numerous. The most traditional data source – the electronic medical record – began its steepest adoption thanks to the American Recovery and Reinvestment Act (ARRA) and its mandate for Meaningful Use (Centers for Medicare & Medicaid Services, 2010). This all but guaranteed the availability of structured data related to clinician and patient interactions: medication use, diagnosis history, lab results, and observation of vitals. Unstructured data arrived in the form of case notes and imaging interpretation. The ecosystem of consumer technology and applications has brought novel sources of data, too: patient-reported mood, physical fitness and food consumption data, recordings of location and ambient noise, and even the metrics of interaction with technology – logins, typing speed, choice of words when writing. This leads to a rich and dizzying array of data for use in healthcare applications.
Administration: A front-line for healthcare ML
The administrative burden of healthcare is one of the reasons the US lags behind other nations when investment is weighed against benefits. Administration is well-suited to big data techniques and machine learning: it can improve efficiency at the desk-side by scaling in more nuanced ways than robotic process automation or rules-driven workflows. An ML tool is capable of reviewing and approving batches of requests for prior authorization – a healthcare provider’s advance notice to an insurer of a specific treatment plan – at a rate impossible for humans. Trained on the behaviors of previous reviewers, the system can highlight the areas of a chart that led to its decision. Final validation rests with a clinician. This human-computer partnership draws attention to a vital need for explainability in model results: the system highlighting its source of signal is not just a convenience but an important output. The clinician reviewer may choose different actions based on the risk factors that the model highlights.
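To make the explainability idea concrete, here is a minimal sketch of a prior-authorization scorer that surfaces the chart features driving its recommendation. The weights, feature names, and bias are entirely hypothetical stand-ins for a model trained on previous reviewers' decisions; a production system would learn them from data.

```python
import math

# Hypothetical, hand-set weights standing in for a model trained on
# the decisions of previous prior-authorization reviewers.
WEIGHTS = {
    "matches_clinical_guideline": 2.1,
    "prior_conservative_treatment": 1.4,
    "documentation_complete": 0.9,
    "requested_duration_days": -0.02,
}
BIAS = -1.5

def score_request(features):
    """Return approval probability plus per-feature contributions."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    z = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-z)), contributions

def explain(features, top_n=2):
    """Surface the chart areas that most drove the recommendation,
    so the clinician reviewer can validate the model's signal."""
    prob, contributions = score_request(features)
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return prob, ranked[:top_n]

prob, drivers = explain({
    "matches_clinical_guideline": 1,
    "prior_conservative_treatment": 1,
    "documentation_complete": 1,
    "requested_duration_days": 30,
})
```

The reviewer sees not just the approval probability but the ranked contributions – the "highlighted areas of the chart" – and remains the final validator.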
The advertising and entertainment industries have driven incredible investment into machine learning technologies: the same techniques that stratify and target consumers based on buying preferences move seamlessly to the management of populations. Techniques like phenotyping and clustering help to find similar cases to highlight treatment plans or members for larger scale targeting. This scenario is most apparent when trying to understand members whose activities may not be predicated on prior behaviors. By way of example: some members who frequently use services like the emergency room are likely to do so year-over-year. Phenotyping can help to highlight those members who make a transition from “no utilization” to “high utilization”, developing a risk profile that helps a health plan advise members, based on their characteristics, how to self-manage conditions or when to seek care of varying acuity.
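The clustering step can be sketched with a minimal k-means on synthetic utilization data. Everything here is illustrative: the two features (last year's vs. this year's emergency-room visits), the Poisson-simulated members, and the seeding of centroids from one member of each simulated group are assumptions made for a small, deterministic example, not a production phenotyping pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic member features: [ER visits last year, ER visits this year].
low = rng.poisson(0.2, size=(50, 2)).astype(float)          # stable low utilizers
high = rng.poisson(6.0, size=(50, 2)).astype(float)         # stable high utilizers
transition = np.column_stack([rng.poisson(0.2, 50),         # low last year...
                              rng.poisson(6.0, 50)]).astype(float)  # ...high this year
X = np.vstack([low, high, transition])

def kmeans(X, init_centroids, iters=50):
    """Minimal k-means: assign members to the nearest centroid, recompute."""
    centroids = init_centroids.copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):          # guard against empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Seed one centroid per simulated group to keep this sketch deterministic.
labels, centroids = kmeans(X, X[[0, 50, 100]])

# The phenotype of interest: low past utilization, high current utilization.
transition_cluster = int(np.argmax(centroids[:, 1] - centroids[:, 0]))
```

The cluster whose centroid jumps from low past to high current utilization is the "no utilization to high utilization" phenotype the text describes – the group a health plan would prioritize for outreach.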
Machine learning models have demonstrated success in calling attention to members whose conditions are at risk of increasing in severity (Dagliati, Marini, & Sacchi, 2017). This sets the stage for an important use case of cost. Predicting the arc of a condition that might progress towards onset of complications gives healthcare providers more time to manage root causes and counsel members on lifestyle modifications. Forecasting of these events has gone beyond simple regression to more sophisticated time-series models that can detect and retrain when their predictions stray – or drift – from the population’s behaviors.
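The drift detection described above can be reduced to a simple sketch: track forecast errors period by period, and flag the model for retraining when recent error departs from the level seen at validation time. The window size, baseline error, and tolerance multiplier below are hypothetical knobs, not values from any cited system.

```python
def detect_drift(errors, window=8, baseline_mae=1.0, tolerance=2.0):
    """Flag drift when the mean absolute error over the most recent
    `window` periods exceeds `tolerance` x the validation-time MAE."""
    if len(errors) < window:
        return False
    recent_mae = sum(abs(e) for e in errors[-window:]) / window
    return recent_mae > tolerance * baseline_mae

# Stable period: forecast errors hover near the baseline.
stable = [0.8, -1.1, 0.9, -0.7, 1.2, -0.9, 1.0, -0.8]

# Population behavior shifts (e.g., a new benefit design): errors grow,
# signaling that the model's predictions have strayed and it should retrain.
drifted = stable + [3.5, 4.1, -3.8, 4.4, 3.9, -4.2, 4.0, 3.7]
```

A production system would pair this trigger with an automated retraining job, which is what lets the time-series models "detect and retrain" as the population's behaviors change.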
Big data techniques in healthcare won the Wall Street Journal the 2015 Pulitzer Prize for Investigative Reporting for its stunning identification of fraud, waste, and abuse among physicians billing Medicare (The Pulitzer Prizes, 2015). This use case highlights data science that seeks out outliers and anomalies: where are the physicians who are not behaving like their peers? At what point do one physician’s billing practices become disjoint with those of the other hundreds of thousands of Medicare providers? Detection of fraud, waste, and abuse is a use case that spans administration, utilization, and cost, and it suits the health system perspective as well. Inefficiencies in workflows, misalignment of protocol between facilities, and lossy supply chains can benefit from these same techniques of outlier and anomaly detection. Health systems might, for example, identify providers who are prescribing opioids at a rate disjoint with that of peers treating a patient cohort of similar profile.
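The simplest version of this peer-comparison idea is a z-score screen: compare each provider's rate to the mean and spread of a peer cohort with a similar patient profile. The provider names and prescribing rates below are fabricated for illustration, and real programs use far more robust statistics, but the shape of the technique is the same.

```python
import statistics

def flag_outliers(rates, z_threshold=2.0):
    """Flag providers whose rate sits more than `z_threshold`
    standard deviations from the mean of their peer cohort."""
    mean = statistics.mean(rates.values())
    sd = statistics.pstdev(rates.values())
    if sd == 0:
        return []
    return [p for p, r in rates.items() if abs(r - mean) / sd > z_threshold]

# Hypothetical opioid prescriptions per 100 patients, for providers
# treating a patient cohort of similar profile.
rates = {
    "provider_A": 4.1, "provider_B": 3.8, "provider_C": 4.4,
    "provider_D": 3.9, "provider_E": 4.2, "provider_F": 18.7,
}
```

Here provider_F's rate is disjoint with the cohort and would be surfaced for review; note that an extreme outlier inflates the cohort's own standard deviation, which is one reason practical systems prefer robust measures such as the median absolute deviation.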
ML: The next frontier for healthcare administration, utilization, and cost
With technology changing fast and new use cases arriving daily, it is important to apply these techniques thoughtfully. While administrative use cases may not bear the same time-sensitivity as a clinical need, they carry ethical hazards. Models must be trained on a representative population and reviewed frequently for bias. Model authors, and all stakeholders in adoption, must train users on the limitations of a model and ensure transparency of results: users must understand why a model makes its recommendations, or be given enough information to make the same decision. With appropriate governance, the power of machine learning brings great benefit to use cases in healthcare administration, utilization, and cost.