by John R. Fischer, Senior Reporter | May 04, 2022
DiA's LVivo Seamless algorithm and Intel's OpenVINO toolkit will speed up the processing time for cardiac ultrasound images.
DiA Imaging Analysis and Intel are using AI to reduce the time it takes to process cardiac ultrasound images in hospitals.
DiA is combining its LVivo Seamless algorithm with the Intel Distribution of OpenVINO toolkit to automatically assess cardiac ultrasound images, cutting processing time by over 40% while retaining accuracy.
LVivo Seamless runs automatically on cardiac ultrasound exams to find and assess the optimal views and produce key measurements that help identify clinical indications that are hard to detect visually or manually. This allows sonographers and cardiologists to assess higher scan volumes more quickly and in a reproducible way.
Intel’s edge-to-cloud infrastructure technology makes these assessments easier by eliminating the need for a discrete graphics processing unit (GPU) or the integration of more complex IT infrastructures. As a result, the LVivo Seamless software’s AI-based models are optimized to complete analyses in less time when running on local hospital IT infrastructure that uses Intel Core processors.
"In most cases it is impossible to modify or improve hospital hardware infrastructure. Intel's solution allowed us to gain a significant improvement in processing time in a short development time that otherwise would have required much more effort and resources if we had tried to do it ourselves. In addition, before engaging with us, Intel implemented OpenVINO with other medical imaging companies with great success, and we believed we can gain substantial results with minimal risk by working with them," Hila Goldman Aslan, CEO and co-founder of DiA, told HCB News.
LVivo Seamless eliminates manual and visual steps needed for cardiac ultrasound view selection and measurement, thereby providing users with more time to devote to other critical tasks in the echocardiography environment.
Intel’s toolkit offers deep learning models, device portability, and higher inference performance with fewer code changes. Its central processing unit (CPU) technology has fewer constraints than GPUs and can accelerate complex, hybrid workloads, including the larger, memory-intensive models typically found in medical imaging. As a result, it can accelerate the workloads that deep learning inference applications like DiA’s LVivo Seamless process.
Intel previously partnered with Philips on a similar endeavor in 2018, in which the two evaluated the efficiency of CPUs for deep learning inference use cases. They found that this approach helped meet objectives faster and gave consumers more affordable access to AI solutions.