But even pristine internal data has limitations. Sparnon notes that HTM organizations must choose commercial partners whose AI models are trained on similar data sets, support local validation, and can normalize annotations or ticket types for meaningful benchmarking. High-quality data, she adds, isn't just operationally critical; it's a business asset. And without contracts that acknowledge its value in model development, organizations risk leaving revenue, and insights, on the table.
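Normalizing ticket types is the kind of step that makes cross-organization benchmarking possible. A minimal sketch of the idea, assuming a hypothetical shared taxonomy (the labels and mappings below are illustrative, not from any specific CMMS or standard):

```python
# Map vendor-specific or free-text ticket types to a common taxonomy
# so metrics can be compared across organizations.
# (All labels here are illustrative assumptions.)

COMMON_TAXONOMY = {
    "pm": "preventive_maintenance",
    "preventative maint.": "preventive_maintenance",
    "cm": "corrective_maintenance",
    "repair": "corrective_maintenance",
    "recall": "alert_recall",
}

def normalize_ticket_type(raw: str) -> str:
    """Return the shared label for a raw ticket type, or 'unmapped'
    so unrecognized types can be reviewed rather than silently dropped."""
    return COMMON_TAXONOMY.get(raw.strip().lower(), "unmapped")

print(normalize_ticket_type("PM"))      # preventive_maintenance
print(normalize_ticket_type("Repair"))  # corrective_maintenance
```

Flagging unrecognized types as "unmapped" rather than guessing keeps the benchmark honest: a human curates the taxonomy as new vendor labels appear.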
Sparnon also cautions against rushing to implement AI within existing CMMS platforms without first addressing foundational data quality. “The good news,” she says, “is that focused attention on data quality now will set you up for the next generation of insights, collaboration, and automation.”

Her advice to HTM teams evaluating AI? Start with a thorough review of inventory and documentation practices. AI is only as good as the data behind it — and that’s where AAMI EQ56 comes in. The updated standard outlines core elements of a strong equipment management program, from inventory control to quality oversight, giving HTM professionals a solid foundation for safe, data-driven decision-making.
And safety, Hanna reiterates, can’t be an afterthought. “Security should always be a priority,” he says. Proven tactics—data minimization, anonymization, encryption—still apply, just as they do for any software. But for sensitive operations, he advises going a step further: deploying edge processing or running systems locally to maintain control and protect patient data.
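Data minimization and anonymization can be applied before a record ever leaves the local environment. A minimal sketch, assuming a hypothetical ticket schema (field names and the salted-hash approach are illustrative, not from any specific CMMS):

```python
# Strip a maintenance ticket down to the fields a model needs and
# pseudonymize the device identifier before any external processing.
# (Schema and salt handling are illustrative assumptions.)

import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def minimize_ticket(ticket: dict, salt: str) -> dict:
    """Keep only the fields needed for analysis; hash the device ID;
    drop patient-linked fields entirely."""
    return {
        "device_id": pseudonymize(ticket["device_id"], salt),
        "symptom": ticket["symptom"],
        "model": ticket["model"],
        # patient_name, room, and similar fields are deliberately omitted
    }

ticket = {
    "device_id": "INF-PUMP-0042",
    "symptom": "occlusion alarm",
    "model": "Pump X200",
    "patient_name": "J. Doe",
    "room": "ICU-3",
}
print(minimize_ticket(ticket, salt="local-secret"))
```

Keeping the salt local means even the party receiving the minimized record cannot reverse the device ID back to the original inventory entry.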
“AI systems must be transparent and reliable to support HTM professionals in their work,” Hanna says. Fortunately, new strategies are making AI not just smarter, but more trustworthy.
To boost reliability, NVRT Labs is turning to confidence scores: a system that ranks AI-generated outputs by certainty. When confidence dips below a set threshold, the task is flagged for human review. The result: AI handles routine duties, while HTM professionals take over when the stakes are high, streamlining workflows without compromising safety.
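The routing logic described above can be sketched in a few lines. This is an illustrative reconstruction, not NVRT Labs' actual implementation; the threshold value and field names are assumptions:

```python
# Route AI outputs by confidence: routine tasks proceed automatically,
# low-confidence tasks are flagged for human review.
# (Threshold and data model are illustrative assumptions.)

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # real systems would tune this per task type

@dataclass
class AIResult:
    task_id: str
    output: str
    confidence: float  # model's certainty in [0.0, 1.0]

def route(result: AIResult, threshold: float = CONFIDENCE_THRESHOLD) -> str:
    """Return 'auto' for routine handling, 'review' to flag for a human."""
    return "auto" if result.confidence >= threshold else "review"

print(route(AIResult("WO-1021", "Replace battery", 0.97)))      # auto
print(route(AIResult("WO-1022", "Firmware mismatch?", 0.42)))   # review
```

The key design point is that the threshold is explicit and adjustable, so the balance between automation and human oversight is a deliberate policy decision rather than a side effect of the model.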
“Internally, we validate performance with audit logs and usage tracking to help us refine prompts, tune behavior, and ensure alignment with safety and performance goals,” he says. That mix of transparency, control, and continuous oversight is what will drive AI’s responsible — and sustainable — growth in HTM, Hanna anticipates.
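An audit log of the kind Hanna describes captures each AI interaction so prompts and behavior can be reviewed and tuned later. A minimal sketch, with an illustrative record schema (no specific product's log format is implied):

```python
# Serialize one AI interaction as a structured audit record.
# (Field names are illustrative assumptions.)

import json
import time
from typing import Optional

def audit_record(task_id: str, prompt: str, output: str,
                 confidence: float, reviewed_by: Optional[str] = None) -> str:
    """Return a JSON line recording what the AI was asked, what it
    produced, how certain it was, and who (if anyone) reviewed it."""
    return json.dumps({
        "timestamp": time.time(),
        "task_id": task_id,
        "prompt": prompt,
        "output": output,
        "confidence": confidence,
        "reviewed_by": reviewed_by,  # None when handled automatically
    })

print(audit_record("WO-1022", "Diagnose alarm", "Check occlusion sensor",
                   0.42, reviewed_by="tech_04"))
```

Because every record carries the prompt, output, and confidence together, usage tracking and prompt refinement can be done directly against the log rather than by guesswork.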
Looking ahead
Yes, the future is now, and HTM is arguably just beginning to tap into AI’s potential. Predictive maintenance, asset optimization, smarter training, and more efficient operations are on the horizon. But as experts like Macht, Sparnon, and Hanna emphasize, realizing that potential depends on a disciplined approach to ethics, data integrity, and cybersecurity.