Artificial intelligence already has a pretty impressive track record in health care, reportedly outperforming doctors at diagnosing heart disease and lung cancer and analyzing, in 10 minutes, genetic sequencing data that would take a team of human researchers about a week to sort through.
But there's more to good health care than just crunching numbers and finding patterns in reams of data, as anyone who’s ever been comforted or heartened by a good nurse or doctor can attest.
Hippocrates knew it back in the fourth century B.C., prescribing a bedside manner for doctors that included grooming recommendations, a calm sense of gravitas, advice to “hold his head humbly and evenly” and even a note that he should be “very chaste, sober, not a winebibber.” The idea was that treating a patient isn’t limited to symptoms, and medical professionals ever since have known the importance of putting the “care” in health care.
AI, for its part, can’t offer a soothing, knowing smile or strike an inspirational doctor pose, but it can do things beyond running algorithms on cancer cells to make recovery better for patients.
Making the Rounds
In one example, El Camino Hospital in California’s Silicon Valley used an AI program to reduce the number of severe falls suffered by recovering patients. The program collects data from electronic health records and real-time tracking of patients, and can alert a nurse that a patient is at risk of falling, U.S. News reported. Six months after installing the software, the incidence of dangerous falls by patients dropped 39 percent.
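A system like the one described above can be thought of as a scoring loop: combine risk factors from the electronic health record with real-time observations, and flag a patient for nurse follow-up when the combined score crosses a threshold. The sketch below is purely illustrative; the field names, weights and threshold are assumptions, not El Camino Hospital’s actual model.

```python
# Hypothetical fall-risk alerting sketch: EHR factors plus real-time
# observations feed a simple weighted score; patients above a threshold
# trigger a nurse alert. All weights and field names are illustrative.

def fall_risk_score(ehr, realtime):
    """Weighted sum of assumed risk factors, in the range 0.0 to 1.0."""
    score = 0.0
    score += 0.3 if ehr.get("prior_fall") else 0.0
    score += 0.2 if ehr.get("sedative_meds") else 0.0
    score += 0.2 if ehr.get("age", 0) >= 75 else 0.0
    score += 0.3 if realtime.get("out_of_bed_unassisted") else 0.0
    return score

def patients_to_alert(patients, threshold=0.5):
    """Return IDs of patients whose score warrants a nurse alert."""
    return [pid for pid, (ehr, rt) in patients.items()
            if fall_risk_score(ehr, rt) >= threshold]

patients = {
    "A12": ({"prior_fall": True, "age": 81}, {"out_of_bed_unassisted": True}),
    "B07": ({"age": 42}, {}),
}
print(patients_to_alert(patients))  # ['A12']
```

A production system would learn such weights from historical fall data rather than hard-coding them, but the alerting structure is the same.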
Cerner, the company whose technology is being used to produce an electronic health records system for the departments of Defense and Veterans Affairs, uses AI in its St. John’s Sepsis Agent to detect early signs of sepsis. A Cloudera platform in use at a national children’s hospital collects terabytes of data on respiration, heart rate and other vital signs to determine the effect of noise and light levels on infants, as a way of improving their care. Another company, DocBox, lets nurses spend more time caring for patients in intensive care by taking over their data-collection duties: it taps into internet-connected devices to automatically gather and analyze all of a patient’s vital signs and waveforms, which in an ICU can come from up to 300 sources.
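The aggregation step in a DocBox-style setup amounts to merging timestamped readings from many device feeds into one per-patient snapshot. Here is a minimal sketch of that merge, assuming a latest-reading-wins rule; the feed format and channel names are invented for illustration, not DocBox’s actual interface.

```python
# Illustrative merge of vital-sign readings from multiple bedside device
# feeds into one snapshot per patient. The newest reading on each channel
# supersedes older ones; input format is an assumption.

from collections import defaultdict

def aggregate_vitals(readings):
    """readings: iterable of (patient_id, timestamp, channel, value).
    Returns {patient_id: {channel: latest_value}}."""
    latest = defaultdict(dict)          # patient -> channel -> (ts, value)
    for pid, ts, channel, value in readings:
        prev = latest[pid].get(channel)
        if prev is None or ts > prev[0]:
            latest[pid][channel] = (ts, value)
    return {pid: {ch: v for ch, (_, v) in chans.items()}
            for pid, chans in latest.items()}

feed = [
    ("icu-3", 100, "heart_rate", 92),
    ("icu-3", 101, "heart_rate", 95),   # newer reading wins
    ("icu-3", 100, "spo2", 97),
]
print(aggregate_vitals(feed))  # {'icu-3': {'heart_rate': 95, 'spo2': 97}}
```

With up to 300 sources per ICU patient, the point of automating this merge is exactly what the article describes: nurses stop transcribing numbers and the system keeps the record current.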
Other AI systems are used for such things as scheduling the optimal times for surgeries and imaging tests, reducing wait times and predicting which patients are most likely to suffer a recurrence of symptoms.
Making the Case
For all of its progress, AI still faces some hurdles to more widespread adoption. One is financial: whether systems can be cost-effective. Another involves worries about AI becoming too autonomous.
A 2017 survey by Healthcare IT News and HIMSS Analytics found the two biggest barriers to AI adoption were that the technology is still developing and that it’s difficult to make a business case for it, the latter reflecting the problems some institutions have had getting an acceptable return on their AI investments.
Another concern is about letting automated programs take over too much of the load and, to get back to Hippocrates, possibly crossing the “do no harm” threshold. What happens if they start operating, so to speak, on their own?
The Food and Drug Administration in February approved AI software to help detect signs of a potential stroke, and last month approved the first AI-powered diagnostic test that doesn’t require a doctor to interpret the results — in this case, an ophthalmology device to detect diabetic retinopathy, which can result in vision loss. Doctors won’t be out of the loop entirely, of course, and humans will still make decisions on care. But greater autonomy could give each system a degree of individuality, so that one system’s results and conclusions might not match another’s.
The Health and Human Services Department is addressing those kinds of concerns, saying a key for AI health care systems going forward is to develop processes and policies that ensure AI methods — and therefore, results — are transparent and reproducible. HHS’ Agency for Healthcare Research and Quality is working on developing interoperable standards for AI in health care. And it’s even using AI to help, collecting smartphone data, integrating social and environmental data and promoting the idea of AI competitions to help develop new techniques.
AHRQ and the Office of the National Coordinator for Health IT will also work with other HHS agencies, including the National Institutes of Health and the FDA, to explore other ways of using AI to improve patient care as part of the government’s work toward precision medicine.