GSA CDO: Data Outcome Feedback Loops Can Improve Predictive Analytics

Incorporating outcome data into AI models can lead to continuous improvement.

Government agencies may have a new way to improve decision-making backed by data and predictive analytics models, according to General Services Administration Chief Data Officer Kris Rowley.

The proposed approach takes preexisting predictive analytics models’ outcome data and feeds it back into the models so that future analytics and predictions incorporate it, creating a feedback loop that integrates machine learning into decision-making.

Traditional predictive models apply transactional data to analytics and predictions processes, which in turn help inform a decision, Rowley said at ACT-IAC’s Artificial Intelligence and Intelligent Automation Forum Wednesday. This is a three-step process that ends at the decision, but Rowley wants to reapply those final results to the model, creating a cycle of improvement rather than a linear process.

“We’re starting to understand that we can take outcome data and feed it back into models to help better predict future events,” Rowley said.

Rowley’s proposed process does not mean feeding just any outcomes back into the model, however. The key is to establish requirements and a methodology for determining how outcomes are used to build predictive models.

At the IA/AI forum, Rowley presented five guidelines he values in applying his methodology to building models. These are to:

  • Decide what attributes define a “successful” or “unsuccessful” outcome from models’ final decisions. Success is not necessarily binary; outcomes can sit on a scale, and that scale can be used to label past data.
  • Develop a hypothesis on features that can impact a successful outcome.
  • Create a model to analyze features and predict which features will lead to successful outcomes in the future.
  • Use the model to inform future decisions, as well as capture outcomes of decisions based on the model.
  • Analyze outcomes from previous decisions and use them to improve and retrain the model.
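The five steps above can be sketched as a simple feedback loop. The following is an illustrative toy model, not Rowley’s actual implementation: it scores candidate decisions by the historical success rate of their features, and every observed outcome is folded back in to retrain it. All class, feature, and variable names here are assumptions for the sake of the example.

```python
# Toy outcome-feedback loop: predictions are informed by past outcomes,
# and each new outcome retrains the model (steps 1-5 from the list above).

class FeedbackModel:
    """Scores decisions by per-feature success rates and retrains on outcomes."""

    def __init__(self):
        # Step 1: outcomes are labeled on a 0.0-1.0 scale, not as a binary.
        self.totals = {}   # feature -> running sum of outcome scores
        self.counts = {}   # feature -> number of observed outcomes

    def predict(self, features):
        """Steps 3-4: score a candidate decision by averaging the historical
        success rate of its features (0.5 for features never seen before)."""
        rates = [self.totals[f] / self.counts[f] if f in self.counts else 0.5
                 for f in features]
        return sum(rates) / len(rates)

    def record_outcome(self, features, score):
        """Steps 4-5: capture the observed outcome of a decision and fold it
        back into the model, closing the feedback loop."""
        for f in features:
            self.totals[f] = self.totals.get(f, 0.0) + score
            self.counts[f] = self.counts.get(f, 0) + 1


model = FeedbackModel()
# Step 2: hypothesized features, labeled with past outcomes (step 1).
model.record_outcome(["vendor_a", "fixed_price"], 0.9)
model.record_outcome(["vendor_b", "cost_plus"], 0.2)

before = model.predict(["vendor_a", "fixed_price"])
# A new decision is made, its outcome is observed, and it is fed back in.
model.record_outcome(["vendor_a", "fixed_price"], 0.3)
after = model.predict(["vendor_a", "fixed_price"])
print(before, after)  # the prediction shifts as outcome data accumulates
```

The point of the sketch is the cycle, not the scoring rule: the same decision is scored differently before and after its outcome is captured, which is the “never-ending churn” of retraining Rowley describes.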

“We have to build the attributes around the data to make sure that we can build a predictive model, and then, how do you use that model to inform decision making?” Rowley said. “Once this is built up, you have to explain it. … We have to explain how these models can help decision-makers make better and more informed decisions.”

Rowley also stressed his final point on improving predictive models over time, explaining that improvement must be a constant process.

“As soon as you’ve developed the model and you start to make predictions, you’re starting to incur more information and more outcome data and more variables and more ways of putting data into this environment to do better prediction,” Rowley said. “There is a never-ending churn of work that needs to be done to manage and maintain models.”

Even with this guidance in mind, Rowley said that outcome data quality and creating definitions for success in outcome data are some of the biggest challenges in building predictive models.

“The quality of that [outcome] data — it has to be very high quality and labeled correctly and tracked and stored directly to the feedback in, so I think the outcome data and how we manage it is the part that we really need to focus on,” Rowley said.
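The quality gate Rowley describes — outcome data that is labeled correctly, tracked, and traceable back to the decision that produced it — can be sketched as a simple validation step before feedback records reach the model. The field names (`decision_id`, `score`) are illustrative assumptions, not GSA’s schema.

```python
# Hedged sketch of an outcome-record quality gate: only labeled, in-range,
# traceable records should be fed back into a predictive model.

def validate_outcome(record):
    """Return a list of quality problems; an empty list means the record
    is fit to feed back into the model."""
    problems = []
    if not record.get("decision_id"):
        problems.append("not traceable to a decision")
    if "score" not in record:
        problems.append("missing outcome label")
    elif not 0.0 <= record["score"] <= 1.0:
        problems.append("outcome label out of range")
    return problems


good = {"decision_id": "D-1017", "score": 0.8}
bad = {"score": 1.7}
print(validate_outcome(good))  # []
print(validate_outcome(bad))   # two problems: untraceable, label out of range
```

Rejecting records like `bad` before retraining is one concrete way to keep low-quality outcome data from degrading the feedback loop.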

Rowley’s discussion of using AI and predictive analytics to loop data back into models recurred in other presentations at the forum.

The Treasury Department, for example, launched a chatbot service for its contact centers throughout the country. These call centers were misdirecting 75% of the calls they received, said Treasury Program Analyst Jennifer Hill. In improving the call services and chatbot, Hill’s team has used a strategy similar to Rowley’s: applying AI feedback to the AI model to make continuous upgrades.

“AI enables AI,” Hill concluded.
