NVIDIA Clara Holoscan streams data from medical devices and accelerates key workflows in processing and producing medical images (Photo courtesy of NVIDIA)

NVIDIA unveils AI computing platform for streaming medical device data

November 18, 2021
by John R. Fischer, Senior Reporter
NVIDIA has unveiled a new AI platform capable of streaming data from medical devices.

Known as NVIDIA Clara Holoscan, the solution is designed to facilitate end-to-end processing and does so by providing a computational infrastructure that connects medical devices and edge servers. This allows developers to create AI microservices that run low-latency streaming applications on devices and pass more complex tasks to data center resources, accelerating the processing, prediction and visualization of data in real time for AI-supported medical technologies.

The platform is expected to help device makers expand the reach of their devices to data centers for applications in robotic surgery, mobile CT scans, bronchoscopy, interventional radiology and radiation therapy planning, among other areas. With Clara Holoscan, third parties can build their own products and update medical instruments, as well as create libraries, AI models and reference applications in ultrasound, digital pathology, endoscopy and more, according to MobiHealthNews.

“The platform allows developers to add as much or as little compute and input/output capability in their medical device as needed, balanced against the demands of latency, cost, space, power and bandwidth,” wrote Kimberly Powell, vice president of Healthcare at NVIDIA, in a blog post announcing the release of the solution.

The solution is software-defined, meaning it can be upgraded over time. Its main objective is to speed up the workflow phases of high-speed I/O, physics processing, image processing, data processing and rendering.

For high-speed I/O, the platform streams data directly from the sensor to GPU memory for ultra-low-latency downstream processing. A visual is then created from the data, such as an image reconstruction of an X-ray, and the image is fed to AI models to detect, classify, segment or track objects. Developers can combine the reconstructed image with other previously acquired images and add supplemental information such as data from EHRs. The data is then rendered in 3D format, in real time, as an interactive cinematic rendering, or in augmented reality with CloudXR.
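The stages described above — high-speed ingest, reconstruction, AI inference, enrichment with supplemental data, and rendering — can be sketched as a simple composable pipeline. This is an illustrative sketch only, not the Clara Holoscan API; every function and type name here is hypothetical, and toy arithmetic stands in for the real GPU-accelerated steps.

```python
# Hypothetical sketch of the streaming pipeline the article describes.
# None of these names come from the Holoscan SDK; the math is a stand-in.
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One unit of sensor data moving through the pipeline."""
    raw: list                                        # raw sensor samples
    image: list = field(default_factory=list)        # reconstructed visual
    detections: list = field(default_factory=list)   # AI model outputs
    metadata: dict = field(default_factory=dict)     # e.g., EHR context

def ingest(samples):
    # High-speed I/O: stream sensor data directly into (GPU) memory.
    return Frame(raw=list(samples))

def reconstruct(frame):
    # Physics/image processing: build a visual from the raw data
    # (e.g., X-ray image reconstruction); doubling is a placeholder.
    frame.image = [s * 2 for s in frame.raw]
    return frame

def infer(frame):
    # AI inference: detect/classify/segment objects in the image;
    # here, flag pixel indices above a toy threshold.
    frame.detections = [i for i, px in enumerate(frame.image) if px > 4]
    return frame

def enrich(frame, ehr):
    # Combine with prior images or supplemental data such as EHR records.
    frame.metadata.update(ehr)
    return frame

def render(frame):
    # Rendering: package output for 3D / cinematic / AR display.
    return {"pixels": frame.image,
            "overlays": frame.detections,
            "context": frame.metadata}

# Run one frame end to end through every stage.
out = render(enrich(infer(reconstruct(ingest([1, 2, 3]))), {"patient": "anon"}))
```

In a real deployment, the article notes, the lightweight stages would run on the device itself while heavier tasks are handed off to data center resources; the sketch simply shows the stages in order.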

These representations can offer various advantages to clinicians, including the ability to better visualize an organ or tumor being segmented, according to Powell. “With an end-to-end platform for deployment, it’s easier for companies to upgrade their install base, bringing new research breakthroughs to the day-to-day practice of medicine.”