How AI Can Stop Doctors Likely to Overprescribe Opioids — and Stem the Crisis



A Tech-Heavy Rx

Using historical data, artificial intelligence and machine learning can predict which physicians are likely to overprescribe opioids and which patients are likely to come back for more. So, why don’t organizations use these anticipatory tools?

Well, they do, just not yet at their true potential scale, as some AI companies have discovered. DataRobot is one of these businesses, focusing on automated machine learning platforms. When fed the right data, its platform predicts overprescription. But challenges remain around collaborating with the many different health care players and gathering that data from them, according to Bill Moschella, DataRobot’s general manager of health care.

Opioid-related data is abundant, and data scientists are eager to access it and build models around it to help fight the crisis. But when humans build those models by hand, there is potential for flaws and errors. Automated machine learning expedites the building, training and deployment of these prediction models, so they can have real impact at the patient and physician level.

So, What’s the Model and How Does It Predict Overprescribing?

The model uses historical health data to build patient and physician profiles.

The patient profile has a few buckets. The first is clinical and looks at data about the person’s health problems: current medications, whether there are recurring injuries or a single traumatic injury, whether the patient has a chronic illness, whether the patient is physically fit or active, and so on.

Psychological aspects bridge the gap from the clinical side, along with demographic details such as age, race, gender, ethnicity, and where the person lives and works. All this information helps predict whether the patient has a high probability of becoming attached to prescribed medications, or just wants to get healthy, alleviate the pain and move on.

The physician profile looks at the claims and transactions generated from the physician’s office: diagnosis and procedure codes, prescriptions, medical and pharmacy claims, etc. It also considers the physicians’ specialties, their ages, the types of procedures and diagnoses they see regularly, their quality scores and outcomes, whether their patients get healthy, the types of insurance their patients typically have, whether the physicians see the same types of patients, how often they prescribe, their propensity to give a certain individual a certain diagnosis, and so on.
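As a rough illustration, the two profiles described above could be represented as simple feature records. The field names here are made up for the sketch, not DataRobot’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical feature records for the two profiles described above.
# Field names are illustrative only, not an actual production schema.

@dataclass
class PatientProfile:
    chronic_illness: bool       # clinical bucket
    recurring_injury: bool
    current_opioid_rx: bool
    physically_active: bool
    age: int                    # demographic bucket
    region: str

@dataclass
class PhysicianProfile:
    specialty: str
    monthly_opioid_rx_count: int  # how often the physician prescribes
    quality_score: float          # quality measures and outcomes

# Example records a model might be trained on.
patient = PatientProfile(
    chronic_illness=True, recurring_injury=False, current_opioid_rx=True,
    physically_active=False, age=52, region="Northeast",
)
physician = PhysicianProfile(
    specialty="pain management", monthly_opioid_rx_count=120, quality_score=0.7,
)
```

In a real pipeline, records like these would be derived from claims and clinical data feeds and passed to the machine learning platform as training features.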

Combining the Profiles for the Bigger Picture

With an individual patient’s historical data, the machine learning algorithms run millions of different permutations to compute a mathematical and statistical probability of that patient falling into addiction.

Combining that clinical probability with the psychological and demographic information on a patient forms a complete prediction probability profile.

“Now I can say, this individual has a high likelihood of probably getting themselves in some trouble here, if they were to have any of the following ailments, and if they were to be prescribed the following drugs,” Moschella said.

Then, a prediction is generated on the physician side. Does a particular physician have a high likelihood of seeing patients like the one described above, and does he or she prescribe these types of drugs in high quantities?

Pairing the predictions from the physician profile with those from the patient side can show that, perhaps, patients with high risk predictions tend to see physicians with a high propensity to prescribe.
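A minimal sketch of that pairing step, assuming each side has already been scored by a trained model (the scores, visit records and threshold below are invented for illustration):

```python
def flag_risky_visits(visits, patient_risk, physician_risk, threshold=0.8):
    """Flag visits where a high-risk patient sees a high-propensity prescriber.

    visits: list of (patient_id, physician_id) pairs.
    patient_risk / physician_risk: dicts mapping an ID to a model-predicted
    probability in [0, 1]. The 0.8 threshold is arbitrary for this sketch.
    """
    return [
        (patient, physician)
        for patient, physician in visits
        if patient_risk.get(patient, 0.0) >= threshold
        and physician_risk.get(physician, 0.0) >= threshold
    ]

# Example: only the visit where both sides score high gets flagged.
flagged = flag_risky_visits(
    visits=[("p1", "d1"), ("p2", "d2")],
    patient_risk={"p1": 0.91, "p2": 0.12},
    physician_risk={"d1": 0.85, "d2": 0.90},
)
```

In practice the two risk scores would come from separate models trained on the patient and physician profiles, and the join would run over claims or scheduling data rather than a hand-built list.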

Putting These Predictions to Use

When a patient is predicted to have a high propensity to fall into a pattern even before going to the doctor, the model can flag that individual on the pharmacy side, or alert a care manager, or simply provide a notice before the physician is even chosen.
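The three notification channels mentioned above could be wired up with routing logic along these lines. The recipients, threshold and function name are hypothetical; real routing would follow the care organization’s own policies:

```python
def route_alert(patient_id, risk_score, threshold=0.8):
    """Decide who gets notified when a patient's predicted risk is high.

    Returns a list of (recipient, patient_id) notifications. The recipients
    mirror the channels described above: the pharmacy, a care manager, and
    a notice issued before a physician is even chosen. Illustrative only.
    """
    if risk_score < threshold:
        return []  # below threshold: no intervention triggered
    return [
        ("pharmacy", patient_id),
        ("care_manager", patient_id),
        ("pre_visit_notice", patient_id),
    ]

# A high-risk patient triggers all three channels; a low-risk one triggers none.
high = route_alert("p1", 0.92)
low = route_alert("p2", 0.15)
```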

And once they’re flagged, it’s left to logistics: How are these predictions used to make a difference? That’s where Moschella thinks the nation has hit a wall.

Challenges that Remain

This data is difficult to get, for starters. Most health care systems are not what Moschella calls “modern platforms”; they lack open APIs or data streams and face many regulations.

Then, there’s deciding who runs these models and to whom the predictions go. The health care industry is extremely complex; no one player has all the information, and systems are required behind the scenes to run this data and put the resulting insights into people’s hands.

So, the capabilities are there, the data is there and the predictions are possible.

“What we’re finding is that the market itself tends to fall down when it comes to actually putting these technologies in place,” Moschella said.

What’s the Solution?

Essentially, all the components of the health care industry need to start thinking about how to come together on this and develop an AI strategy.

“And as they think about AI strategy, the AI meaning the overarching umbrella for machine learning with a subset of deep learning here, and then how do you create your different AI products within there,” Moschella said.

For example, a health care payer could determine how to look at risk and intervention, and build an entire opioid-type product as part of its AI strategy with that data, just by applying the machine learning to the data and creating predictions. That payer could offer those predictions as part of a service to providers, and implement a data-sharing system between payer and provider.

But this requires these organizations, government and private, to come together and understand how they can share data and predictions as part of an enterprise strategy for AI and machine learning, and use these kinds of tools to collaborate.

“The bigger picture is there is a huge opportunity here, and if people can really focus on having an AI strategy, including this as an output in their AI strategy, we can make a bigger impact in the world — especially domestically,” Moschella said.

Where’s this Happening Now?

Moschella said there are pockets of both the public and private sectors touching pieces of this capability, so it’s not absent. But it’s also not as big as it should be from an impact perspective.

“We have to see that happen,” he said. “Health care has the greatest opportunity to receive impact from machine learning.”

There’s a massive push on data, such as the Health and Human Services Department’s opioid Code-a-Thon, where invited coders used aggregated open data to find solutions to prevent opioid-related deaths.

“What we would like to see is a much more serious effort,” Moschella said, “a collaborative effort across all the organizations that hold all of the pieces of data.”

These models are being built and predictions are being made, but only in a limited fashion. More precise and automated predictions require a much larger scale.

“We need more of these organizations to come together, and they have to have an AI strategy,” Moschella added.

Moving Beyond Data

“We’re still seeing a lot of effort on the data side,” Moschella said. That effort is necessary, and organizations may see it as the first stage of a step-by-step process, but Moschella argued otherwise.

“You can be applying the machine learning to these things as they start to become available, and that’s more of an agile methodology.”

And while pieces of this prediction-building work appear across government and industry, Moschella encouraged the use of automated machine learning to expedite the model-building process and form predictions faster.

But those early adopters are helping.

“They’re evangelizing it, and they’re getting it out,” Moschella said. “If you ask me what does it take to get there, that’s what it takes.”

The health care industry is large and complex, but it’s a follow-the-leader type of market, and as Moschella said, it has the greatest opportunity for machine learning to make a real, human impact.