by John R. Fischer, Senior Reporter | August 13, 2020
A new deep-learning technique in development could identify COVID-19 patients faster from CT scans.
Researchers at the University of Notre Dame in Indiana are building a system that detects and extracts disease-specific features from chest CT scans. The solution is expected to reduce the burden on radiologists, who must screen each image while working overtime to manage the large number of suspected cases, and to overcome limitations of other standard COVID-19 tests.
"There is a global shortage of test kits in many countries, and patients have to endure lengthy waiting time to get results," Yiyu Shi, associate professor in the department of computer science and engineering at Notre Dame and the lead researcher on the project, told HCB News. "It typically takes two to three days, sometimes more than a week if the number of tests pile up in epicenters. Yet positive cases should be identified as quickly as possible so that patients can be quarantined immediately. CT imaging is anyway routinely used to evaluate patients with suspected pneumonia, so it is just one more step to identify those with COVID-19."
CT scans can reveal visual signs of COVID-19, which appears as a haziness on images of the lungs. One common sign is ground-glass opacity, a hazy region of the lung that can indicate abnormal lesions. The method developed by Shi and Jingtong Hu, an assistant professor at the University of Pittsburgh, uses 3D data derived from CT scans to quickly detect and display visual features of COVID-19-related pneumonia. It was inspired by Independent Component Analysis and uses a statistical architecture to break each image into smaller segments, allowing deep neural networks to target COVID-19-related features within large 3D images.
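The segmentation idea described above can be illustrated with a minimal sketch: a large 3D CT volume is split into smaller non-overlapping sub-volumes that a neural network could then process individually. This is a generic patch-extraction illustration, not the researchers' actual architecture; the function name, patch sizes, and random "scan" are assumptions for demonstration.

```python
import numpy as np

def extract_patches(volume, patch_size):
    """Split a 3D volume into non-overlapping sub-volumes (patches).

    Breaking the full scan into smaller segments lets a deep neural
    network focus on localized features rather than the entire image.
    """
    d, h, w = volume.shape
    pd, ph, pw = patch_size
    patches = []
    for z in range(0, d - pd + 1, pd):
        for y in range(0, h - ph + 1, ph):
            for x in range(0, w - pw + 1, pw):
                patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

# A toy 64x64x64 "scan" split into 32x32x32 patches yields 8 sub-volumes.
volume = np.random.rand(64, 64, 64).astype(np.float32)
patches = extract_patches(volume, (32, 32, 32))
print(patches.shape)  # (8, 32, 32, 32)
```

In practice, each patch would be fed to a 3D convolutional network, keeping memory use far below what the full scan would require.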
To put it into practice, the two are combining analysis software with off-the-shelf hardware to create a lightweight mobile device that can easily and immediately be integrated in clinics nationwide. The plug-and-play device will take a patient's 3D CT image as input and use a statistical neural network to predict whether it is a possible COVID-19 case or regular pneumonia. Because no internet access is required, patient data never needs to be uploaded to the cloud, avoiding the security and privacy concerns that cloud processing could raise.
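The device's prediction step might work along these lines: per-segment scores from the network are aggregated locally into a single scan-level decision, with nothing leaving the machine. This is a hypothetical aggregation rule for illustration only; the threshold, mean-based rule, and labels are assumptions, not the authors' published method.

```python
import numpy as np

def classify_scan(patch_scores, threshold=0.5):
    """Aggregate per-patch COVID-19 probabilities into one scan-level call.

    All computation happens on the device itself, so no patient data
    is transmitted elsewhere. (Hypothetical rule for illustration.)
    """
    mean_score = float(np.mean(patch_scores))
    label = "possible COVID-19" if mean_score >= threshold else "regular pneumonia"
    return label, mean_score

# Example: four patch-level probabilities from a (hypothetical) network.
label, score = classify_scan(np.array([0.9, 0.8, 0.7, 0.2]))
print(label, round(score, 2))  # possible COVID-19 0.65
```

A real system would likely use a learned aggregation rather than a simple mean, but the privacy property is the same: the decision is computed entirely on the local hardware.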
The most challenging part of the method is making computationally expensive deep learning for 3D medical images run on a resource-constrained device, because 3D CT scans are too large for specific features to be detected and extracted efficiently and accurately on plug-and-play mobile hardware.
The two, however, are confident that the method will be usable and will improve workflow and throughput for radiologists in an affordable manner.
"The approach has a low cost overhead (assuming a CT scan needs to be done anyway for lung examination for suspected pneumonia)," said Shi. "Results will be immediately available (this can be ready in just minutes after the CT is done); and data will remain secure, safe, and private (data does not need to be uploaded to servers or cloud)."
Shi and Hu are working with radiologists at Guangdong Provincial People’s Hospital in China and the University of Pittsburgh Medical Center, which is providing a large number of CT images of COVID-19 pneumonia for the project.
The research is being funded by the National Science Foundation through a Rapid Response Research (RAPID) grant.
The team is aiming to complete development by the end of the year.