From the January/February 2019 issue of HealthCare Business News magazine
By Steven Tolle
In 2013, Trafton Drew, then an attention researcher at Harvard Medical School, conducted an experiment designed to test the theory of “inattentional blindness,” which occurs when people fail to see an object in plain sight because they are too focused on looking for something else.
He did this by superimposing a matchbook-sized picture of a man in a gorilla suit onto a series of CT slices of the kind radiologists typically review when screening for lung cancer. Ultimately, 83 percent of the radiologists did not see the gorilla.
The experiment is a prime example of the human capacity to focus so intently on a specific task, in this case looking for lung nodules, that everything else we see is filtered and shaped by that focus. It’s also a compelling argument for the potential of artificial intelligence (AI) in imaging.
It is our premise that radiologists are very good at finding what they are looking for but not at finding what they aren’t looking for. Radiologists are incredibly proficient at spotting lung nodules. Industry-wide, the miss rate for radiologists looking for specific abnormalities is just 3-5 percent. But that doesn’t mean they are seeing everything.
The near-term potential of AI is to build a safety net that lets us identify the high-value signals that might otherwise not have been the focus. Longer-term, the technology has the potential to revolutionize precision medicine and improve patient care. Make no mistake, a lot still needs to happen before that long-term promise is fulfilled. But many of the critical building blocks are already in place today.
For example, right now, today, we are able to use natural language processing technology to read clinical text from electronic health records (EHRs) and progress notes to identify, categorize, and code unstructured data and turn it into actionable, quantifiable insights on a patient chart. That data is allowing us to highlight potential discrepancies in documentation and provide valuable clinical context to physicians during image interpretation.
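To illustrate the kind of transformation described above, here is a deliberately simplified sketch of turning free-text clinical notes into coded, structured concepts. Production systems use trained NLP models and standard terminologies; the phrase vocabulary, category names, and code values below are all made up for demonstration.

```python
import re

# Toy vocabulary mapping surface phrases to illustrative codes.
# These categories and code values are invented for this sketch,
# not real ICD-10 or SNOMED entries.
VOCABULARY = {
    "shortness of breath": ("symptom", "SOB-01"),
    "smoker": ("risk_factor", "RISK-TOBACCO"),
    "lung nodule": ("finding", "FIND-NODULE"),
    "emphysema": ("condition", "COND-EMPHYSEMA"),
}

def extract_concepts(note: str) -> list[dict]:
    """Scan free-text clinical notes for known phrases and return
    structured, coded concepts ordered by position in the note."""
    note_lower = note.lower()
    hits = []
    for phrase, (category, code) in VOCABULARY.items():
        for match in re.finditer(re.escape(phrase), note_lower):
            hits.append({
                "phrase": phrase,
                "category": category,
                "code": code,
                "offset": match.start(),
            })
    return sorted(hits, key=lambda h: h["offset"])

note = ("Patient is a long-time smoker presenting with shortness of breath. "
        "Prior CT showed a small lung nodule in the right upper lobe.")
for concept in extract_concepts(note):
    print(f"{concept['category']}: {concept['phrase']} -> {concept['code']}")
```

Even this toy version shows the core idea: unstructured narrative becomes discrete, quantifiable data points that can surface on a patient chart alongside the images being read.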
This is a critical first step. Researchers from the Medical College of Wisconsin recently found that when radiologists had the time and access to review patient charts, they changed their findings 20 to 25 percent of the time. There is data locked in the patient’s chart that can be critical to a diagnosis.
The next step – which is being tested with radiologists around the world today – is leveraging those text analytics to inform care decisions. Initially, our work here focused on specific organs, but it is evolving quickly to address specific conditions within entire body systems. What that means is that our technology will soon be able to screen chest X-rays and chest CT scans to help clinicians identify conditions such as emphysema, COPD, aneurysm, pulmonary embolism and pneumonia.
We are entering a new world of precision medicine, and imaging will play a large role in that evolution. As the technology and science evolve, they will enable image-based biomarkers that, coupled with liquid biopsies, can be used to identify signals consistent with disease.
While it’s tempting to focus on that long-term potential, the most exciting developments around AI in imaging are actually those that are taking shape today. By carefully nurturing this technology, partnering with healthcare providers around the world to train and test it, and aiming for consistent improvements in workflow processes, we are putting the pieces in place that will enable a real, sustainable revolution in healthcare.
About the author: Steven Tolle is vice president, global strategy and business development, IBM Watson Health.