With research showing that only 22% of Americans have a written record of their end-of-life wishes, the team at OSF HealthCare in Illinois is using artificial intelligence to help physicians identify patients at higher risk of death during their hospital stay. According to a press release from OSF, the team has developed an AI model that predicts a patient's likelihood of dying within five to 90 days of hospital admission. The ultimate goal of the initiative is to enable clinicians to have critical end-of-life discussions with these patients.
Dr. Jonathan Handler, the lead study author and senior fellow of innovation at OSF HealthCare, said the organization's objective is to have an advance care planning discussion documented for every patient, so that the care each patient wants can be delivered, particularly at the end of life, when the patient's clinical condition can make communication difficult. The aim is to prevent situations in which patients cannot convey their preferences, such as when they are unconscious or on a ventilator. By predicting mortality and prompting earlier documentation, the AI model could help keep patients from missing out on the benefits of hospice care.
To instill a sense of urgency, the researchers set the model's prediction window to begin at five days and end at 90 days after admission, given that the average hospital stay is about four days. The AI model was tested on a dataset of more than 75,000 patients spanning different races, ethnicities, genders, and socioeconomic backgrounds. The research, recently published in the Journal of Medical Systems, found that the mortality rate among all patients was one in 12 (roughly 8%). Among patients the AI model flagged as having a higher likelihood of dying, the mortality rate rose to one in four (25%), about three times the overall rate.
The model was tested both before and during the COVID-19 pandemic and produced nearly identical results. Dr. Handler explained that the mortality predictor was trained on 13 types of patient information, including clinical trends, organ function, patterns of healthcare visits and their intensity, and age. From this information, the AI estimates the likelihood that a patient will die within the specified timeframe.
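The press release does not describe how OSF's predictor is built, but a risk model of this general kind can be sketched with standard tooling. The snippet below is a minimal, hypothetical illustration of a classifier trained on a handful of stand-in features (age, an organ-function lab value, a clinical-trend score, and counts of prior visits and ICU days) to output a five-to-90-day mortality probability; the feature names, synthetic data, and gradient-boosting choice are assumptions made for illustration, not details of OSF's model.

```python
# Hypothetical sketch only: feature set, synthetic data, and model choice are
# illustrative assumptions, not details of OSF HealthCare's actual predictor.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Stand-ins for the kinds of inputs the article mentions: age, organ function,
# clinical trend, and how often and how intensely the patient has needed care.
X = np.column_stack([
    rng.integers(18, 95, n).astype(float),  # age in years
    rng.normal(1.0, 0.4, n),                # creatinine (organ-function proxy)
    rng.normal(0.0, 1.0, n),                # clinical-trend score (worsening > 0)
    rng.poisson(2, n).astype(float),        # hospital visits in the prior year
    rng.poisson(1, n).astype(float),        # ICU days in the prior year (intensity)
])

# Synthetic outcome: death within 5-90 days of admission (for demonstration only).
log_odds = 0.03 * X[:, 0] + 1.5 * X[:, 2] + 0.4 * X[:, 4] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The trained model returns a probability ("confidence level") for each patient.
probs = model.predict_proba(X_test)[:, 1]
print("Validation AUC:", round(roc_auc_score(y_test, probs), 3))
print("Example patient risk:", round(float(probs[0]), 3))
```

In a production setting these inputs would come from the electronic health record, and the model would require validation across demographic subgroups, as the study's authors did with their 75,000-patient dataset.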
The AI model gives physicians a probability, or "confidence level," along with an explanation of why the patient is at elevated risk of death. Dr. Handler emphasized that the model condenses a large volume of information that would take clinicians considerable time to gather, analyze, and summarize, and presents it alongside the prediction to support decision-making.
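Again as an assumption-laden sketch rather than OSF's actual method, one simple way to pair a probability with a human-readable "why" is a linear model whose per-feature contributions to the log-odds can be ranked; the feature names and data below are invented for illustration.

```python
# Hypothetical sketch of pairing a risk probability with a ranked explanation;
# the feature names, data, and linear attribution scheme are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age_z", "creatinine_z", "clinical_trend", "visit_count_z", "icu_days_z"]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))  # standardized synthetic features
true_log_odds = 0.8 * X[:, 0] + 1.2 * X[:, 2] - 1.0
y = (rng.random(2000) < 1 / (1 + np.exp(-true_log_odds))).astype(int)

clf = LogisticRegression().fit(X, y)

def explain(patient: np.ndarray):
    """Return the predicted risk and each feature's signed contribution to the
    log-odds (coefficient * value), ranked by magnitude as a simple explanation."""
    prob = clf.predict_proba(patient.reshape(1, -1))[0, 1]
    contributions = dict(zip(feature_names, clf.coef_[0] * patient))
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return prob, ranked

prob, reasons = explain(X[0])
print(f"Predicted 5-90 day mortality risk: {prob:.2f}")
for name, contribution in reasons[:3]:
    print(f"  {name}: {contribution:+.2f} to the log-odds")
```

The design point this illustrates is the one Dr. Handler describes: the prediction arrives already summarized, with the most influential factors surfaced so the clinician does not have to assemble and analyze the underlying data manually.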
Dr. Handler drew inspiration from a similar AI model developed at NYU Langone. However, he acknowledged that OSF's patient population differs from NYU's, so the team used a different type of predictor to achieve the desired performance. He also noted that the model isn't flawless: a prediction of increased mortality risk does not mean the patient will die. The main goal is to prompt clinicians to start the conversation and ensure that patients receive end-of-life care tailored to their needs and preferences.
OSF HealthCare has integrated the AI tool into its workflow to support clinicians seamlessly. The team is currently optimizing the tool to maximize impact and foster meaningful patient-clinician interactions.
While recognizing the potential benefits of OSF’s model, Dr. Harvey Castro, a board-certified emergency medicine physician in Dallas, Texas, and an AI expert, flagged potential risks and limitations. These include the possibility of false positives, which could cause unnecessary distress for patients and their families if the AI incorrectly predicts a high risk of mortality. False negatives are also a concern, as failure to identify patients with a high risk of mortality could result in delayed or missed end-of-life discussions. Dr. Castro emphasized the importance of combining AI predictions with compassionate human interaction, as end-of-life discussions can deeply affect patients psychologically.
Other risks identified by Dr. Castro include over-reliance on AI, data privacy concerns, and potential bias if the model is based on limited datasets, leading to disparities in care recommendations for certain patient groups. He stressed the need for continuous monitoring and feedback to ensure the accuracy and benefits of such models in real-world scenarios. Ethical considerations regarding AI’s role in healthcare, particularly in life and death predictions, are of paramount importance.
In conclusion, OSF HealthCare's AI model helps physicians identify patients at higher risk of death during their hospital stay, with the goal of facilitating critical end-of-life discussions and ensuring that patients receive care aligned with their wishes. Ethical consideration, human interaction, continuous monitoring, and feedback are all vital to the responsible deployment and optimization of AI models in healthcare.