Inside the AI Pilots for Security, Knowledge Management at CMS

The agency is looking at how AI can free up manual workloads as it develops a larger artificial intelligence strategy.

The Centers for Medicare & Medicaid Services is exploring how artificial intelligence can reduce workforce burden through two new pilots focused on knowledge management and information security.

“We are taking small chunks and we are learning and piloting programs, testing out hypotheses,” Rick Lee, a CMS senior technical advisor, said. “We are extracting the most value we possibly can in small increments, building on lessons learned in the knowledge we have gleaned from each one and extending that into larger engagements.” 

For CMS’ knowledge management pilot, the team made data more discoverable and created more relevant search results. AI helped train search models that could sort through large datasets and automate content curation, a challenge for larger organizations like CMS. 

“Knowledge management at enterprises like CMS is a challenge. We are working with multiple content repositories, with multiple styles, without a formalized taxonomy of how that data can be understood by a machine,” Lee said. 

CMS is leveraging AI to free up workloads and manage content through an iterative process: AI generates a workflow, human teams validate it, and the validated output is used to retrain the models so that search functions continually improve.
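The loop described above, in which model output is reviewed by humans and the corrections feed back into training, can be sketched in miniature. This is an illustrative example only; the keyword "model," document names, and functions are all hypothetical, not CMS's actual system.

```python
# One iteration of a validate-and-retrain loop: the model proposes topic
# labels for documents, reviewers correct them, and corrections are folded
# back into the model. All names here are invented for illustration.

model = {"claims": "billing", "enrollment": "eligibility"}  # keyword -> topic

def propose(docs):
    """AI step: propose a topic for each document from known keywords."""
    out = {}
    for doc in docs:
        out[doc] = next((topic for kw, topic in model.items() if kw in doc.lower()),
                        "needs review")
    return out

def validate_and_retrain(proposals, human_labels):
    """Human step: reviewer corrections override proposals and teach the model."""
    for doc, topic in human_labels.items():
        if proposals.get(doc) != topic:
            # learn a new keyword implied by the reviewer's label
            model[doc.split()[0].lower()] = topic
    return {**proposals, **human_labels}

docs = ["Claims processing guide", "Provider onboarding FAQ"]
labels = validate_and_retrain(propose(docs),
                              {"Provider onboarding FAQ": "providers"})
```

After one pass, the model has learned a new keyword ("provider"), so the next search or curation pass needs less human correction, which is the compounding effect Lee describes.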

For the second iteration of this pilot, CMS is leveraging AI in a burden-reduction capacity, Lee said. The goal is to identify processes and best practices and remove obstacles in a small-scale engagement that CMS can draw on as it launches a full AI program.

"Machines need to understand and define the data relationships,” Lee said. “Our approach was to conduct a fairly robust data discovery process, where our data scientists went in, made sense of the data and developed taxonomies that machines could understand, so the machine could then interpret data.” 

CMS also developed use cases and leveraged natural language processing to test and refine hypotheses and models. Currently, CMS is working to refine these models and extend them to deliver actionable outcomes. Lee outlined the lessons learned from this pilot program, noting that understanding and mapping technologies are critical steps.   

“Our hypothesis was that we could create a system that would provide some level of confidence in knowledge management and knowledge curation,” Lee said. “We didn’t get there. We learned a lot about a great deal of activities that need to occur upstream from an AI project and that’s yielded tremendous results and value back to CMS.”  

CMS is collaborating with its partners to develop an AI playbook that will outline lessons learned and recommendations for future AI projects. The agency is also developing a taxonomy process to reduce the burden of upstream work.

“We’re starting to be able to automate that a little bit more,” Lee said. “We’re expanding the lessons learned to other AI initiatives, and we’re realizing conversions of those lessons learned, and that’s delivering new value because we’re not having to spend time reinventing the wheel.” 

For information security, CMS is accelerating compliance to keep pace with the evolving threat landscape. Much of CMS' security work is done in silos, which makes for a more reactive approach, said CMS Senior Technical Advisor Andres Colon.

“Controls are essentially the bedrock of compliance. They establish the technical safeguards that you need to implement in order to secure a federal information system. The challenge of CMS is it’s very laboring. For security compliance, it is very manual and heavily burdensome,” Colon said.  

Colon recommended that security teams draw up plans that inform implementation before adopting a specific technology, so agencies can configure new tools to meet compliance requirements from the start.

CMS uses the concept of "reusable components": collections of controls that provide sample security implementations and guidelines for how certain technologies should be used. These pre-approved building blocks can help teams reduce development time.
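The idea of inheriting pre-approved controls can be made concrete with a small sketch. The library entry, technology name, and helper below are hypothetical; only the control IDs follow real NIST SP 800-53 naming.

```python
# Illustrative "reusable component": a pre-vetted bundle of controls for a
# commonly used technology, which product teams inherit instead of
# re-documenting. The library and its contents are invented for illustration.

component_library = {
    "managed-postgres": {
        "controls": {"AC-2", "AU-2", "SC-13", "SC-28"},
        "notes": "Encryption at rest (SC-28) satisfied by managed-service config.",
    },
}

def remaining_controls(required, components):
    """Controls a team must still document after inheriting components."""
    inherited = set()
    for name in components:
        inherited |= component_library[name]["controls"]
    return required - inherited

todo = remaining_controls({"AC-2", "AU-2", "IA-5", "SC-28"}, ["managed-postgres"])
```

In this toy case, inheriting the component leaves only one control (IA-5) for the team to address, which is the development-time savings Colon describes.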

“The sooner teams know what is required, the sooner they can secure systems for production. That means they’re more prepared for an assessment. The more prepared they are, the more likely they'll get an authorization quickly,” Colon said.  

In applying AI to security compliance, CMS posited that AI could help humans analyze technologies to produce reusable controls, reducing the burden of writing security compliance documentation. That concept evolved into a question: "Can data science and AI be applied to identify, create and vet reusable components for CMS?" Colon said.

To test its hypothesis, CMS is building an automated AI pipeline that applies data science, natural language processing and machine-learning models to the agency's security plans, then refines the output by bringing subject matter experts into the process to take the findings further.
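A toy version of one stage of such a pipeline: mine security-plan text for implementation statements that recur across many systems, then queue the most common ones for subject-matter-expert review as candidate reusable components. The sample plans, the sentence-level splitting, and the threshold are all invented for illustration, not CMS's actual method.

```python
# Surface recurring implementation statements across security plans as
# candidates for SME vetting. Plans and threshold are illustrative only.
from collections import Counter

plans = [
    "Audit logs are shipped to Splunk. Passwords rotate every 60 days.",
    "Audit logs are shipped to Splunk. Sessions expire after 15 minutes.",
    "Audit logs are shipped to Splunk. Passwords rotate every 60 days.",
]

def candidate_components(plans, min_systems=2):
    """Statements appearing in at least min_systems plans go to SME review."""
    counts = Counter(s.strip() for plan in plans
                     for s in plan.split(".") if s.strip())
    return [stmt for stmt, n in counts.most_common() if n >= min_systems]

queue = candidate_components(plans)
```

The human-in-the-loop step then vets each queued statement before it becomes a shared, pre-approved component, matching the refine-with-experts stage Colon outlines.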

“Our primary goal is making it easy for product teams to meet compliance obligations. That will help us deliver faster. We aim to reduce the burden of preparing for authorization by providing reusable compliance components for commonly used technology at CMS. How to do this? In the form of an open, shared component library. AI is really paving the way for us to get there quickly,” Colon said.

 