The algorithm can pick out tiny abnormalities as small as 100 pixels in size

Algorithm beats two out of four radiologists in detecting brain hemorrhages

October 23, 2019
by John R. Fischer, Senior Reporter
An algorithm under development has proven its worth by beating out two of four expert radiologists in identifying tiny brain hemorrhages in head CT scans.

The results hold promise for faster and more efficient treatment of patients who suffer traumatic brain injuries, strokes and aneurysms, according to researchers at UC San Francisco and UC Berkeley: the AI technology could sift through thousands of images daily, flagging significant abnormalities for radiologists to examine more quickly and closely.

“The providers who could benefit from this algorithm include those in radiology for faster interpretation with fewer misses, as well as neurosurgery, neurology and emergency medicine for faster initial interpretation and demarcation of abnormalities directly on images,” Dr. Esther Yuh, associate professor of radiology at UCSF and co-corresponding author of the study, told HCB News. “Many patients are also highly interested in seeing and understanding their own images to better understand their condition.”

Each brain scan can comprise so many images that radiologists sometimes rely on mice with frictionless wheels to scroll through large 3D stacks of images in movie format, searching for the tiny abnormalities that indicate life-threatening emergencies. Some of these abnormalities may be on the order of 100 pixels in size within a 3D stack containing over a million pixels, making it possible for even expert radiologists to miss them, potentially with grave consequences.

In cases of hemorrhage, the algorithm made these determinations in one second, tracing the detailed outlines of the abnormalities it found to show their location within the brain’s three-dimensional structure. Among its findings were small abnormalities missed by experts, which the algorithm classified by subtype. It was also able to determine whether an entire exam, consisting of a 3D stack of approximately 30 images, was normal.
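The paper's exact decision rule isn't described here, but an exam-level read like this can in principle be derived from per-pixel predictions. A minimal sketch, assuming (hypothetically) that the model outputs a per-pixel hemorrhage probability map for each slice of a ~30-slice stack:

```python
import numpy as np

def exam_is_abnormal(prob_maps, threshold=0.5):
    """Flag an exam as abnormal if any pixel on any slice has a
    predicted hemorrhage probability above the threshold.

    prob_maps: array of shape (num_slices, H, W), values in [0, 1],
    e.g. the per-pixel output of a segmentation model.
    """
    return bool(np.max(prob_maps) > threshold)

# A 30-slice exam with no suspicious pixels anywhere...
normal = np.zeros((30, 64, 64))
print(exam_is_abnormal(normal))   # False

# ...versus one with a small high-probability region on a single slice.
abnormal = normal.copy()
abnormal[12, 30:34, 30:34] = 0.9
print(exam_is_abnormal(abnormal))  # True
```

In practice the threshold would be tuned to trade missed abnormalities against false positives, which is exactly the balance the researchers emphasize below.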

The powerhouse behind the algorithm is a fully convolutional neural network (FCN), trained on 4,396 CT exams. Training was especially extensive: each small abnormality was manually delineated at the pixel level, and a number of steps were taken to keep the model from misinterpreting random variations, or "noise," as meaningful. In addition, researchers fed the network only a "patch" of an image at a time, contextualized by the slices that directly preceded and followed it in the stack. This allowed the algorithm to be extremely accurate and learn from the relevant information in the data without "overfitting" the model, that is, drawing conclusions from insignificant variations within the data. They called the model PatchFCN.
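The patch-with-context idea can be illustrated in a few lines. This is only a sketch of the concept, not the authors' implementation; the patch size, crop coordinates and function name are all hypothetical:

```python
import numpy as np

def make_patch_input(stack, slice_idx, y, x, size=64):
    """Crop a square patch from one slice and stack it with the same
    crop from the slices directly before and after it, giving the
    model local 3D context. Edge slices are clamped to themselves.

    stack: 3D CT volume of shape (num_slices, H, W).
    Returns an array of shape (3, size, size): (previous, current, next).
    """
    n = stack.shape[0]
    prev_i = max(slice_idx - 1, 0)
    next_i = min(slice_idx + 1, n - 1)
    crop = lambda i: stack[i, y:y + size, x:x + size]
    return np.stack([crop(prev_i), crop(slice_idx), crop(next_i)])

volume = np.random.rand(30, 512, 512)  # a hypothetical 30-slice head CT
patch = make_patch_input(volume, slice_idx=10, y=100, x=200)
print(patch.shape)  # (3, 64, 64)
```

Feeding the network small crops rather than whole images limits how much irrelevant variation it can latch onto, which is one common way to reduce overfitting.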

The algorithm also provided information that physicians need to determine optimal treatment, and made all its findings with an acceptable rate of false positives, reducing the amount of time needed to review results.

“We wanted something that was practical, and for this technology to be useful clinically, the accuracy level needs to be close to perfect,” said Yuh in a statement. “The performance bar is high for this application, due to the potential consequences of a missed abnormality, and people won’t tolerate less than human performance or accuracy.”

The authors are currently evaluating the algorithm’s use in assessing CT scans from trauma centers nationwide, as part of a research study headed by Dr. Geoffrey Manley, professor and vice chair of neurosurgery at UCSF.

Funding was provided by the California Initiative to Advance Precision Medicine (California Governor’s Office of Planning and Research) and Swiss National Science Foundation Early Postdoc. Mobility Fellowship 165245. Computing time was facilitated by Amazon Web Services.

The findings were published in Proceedings of the National Academy of Sciences (PNAS).