Improving radiology through feedback and metrics

August 09, 2017
By Dr. William Moore

Health care informatics has many facets and is rife with opportunities to improve patient care.

I recall during residency always begging for feedback on our reads. In fact, I remember one attending who would specifically add "feedback would be appreciated" to the end of many of his reports. As you can imagine, he got little or no response to these requests. For years, many of us would carry around pieces of paper, notebooks or some other manual record to follow up on interesting cases. This required us to be fastidious with our notes, and it was biased toward “interesting cases.”

As PACS became more ubiquitous, we demanded the ability to save cases. This was a revolutionary step forward, allowing saving and retrieval of cases with previously unthinkable ease. However, this practice was also flawed. The primary issue is that these systems relied on human intervention to record and follow up on each case. Further, the case selection was biased toward cases the radiologist or the resident determined were interesting enough to warrant the effort to follow up. As the volume of imaging studies increases and reimbursement drops, productivity is critical and one’s ability to spend a significant amount of time on educational, quality assurance and/or quality improvement processes is limited.

In order to build systems that solve a problem, we must understand what the issues are and what success would look like. Lack of feedback on radiology reads is a classic problem that is perfect for an informatics solution. In fact, in the book “Black Box Thinking” by Matthew Syed, mammography is used as an example of why people don’t learn from their mistakes. Currently, most health care facilities have an electronic medical record. These EMRs are packed with data organized in a variety of ways, but typically not organized around a radiologist’s workflow. Hunting through an EMR for correlative results such as pathology can be a laborious task, taking several minutes in the best scenario.

We decided to take this on as a project. With every project, we go through a process of understanding the issue, planning the solution(s), determining what success would look like and determining metrics. The problem we are looking to solve is the lack of feedback on the cases read by a radiologist. Its importance lies in maintaining quality at the individual and group level, potentially improving quality and uncovering potential systematic bias in reporting/interpretation.

What would success look like, a.k.a. what are the system requirements? A successful system would need to be simple, robust and as inclusive as possible, with rapid feedback. Ease of use is critical. A system would be considered a failure if it was cumbersome, required extensive clicking or lacked specificity. To determine success, we created the following metrics: the radiologist designating each case as concordant, a learning opportunity (lack of concordance) or not applicable; and the percentage of cases marked with one of these designations.
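The adoption metric above is simple to compute once each matched case records its designation. A minimal sketch of that calculation (the field values and function names are hypothetical; the actual system's data model is not described here):

```python
from collections import Counter

# Hypothetical one-click designations; None means the case was never marked.
CONCORDANT = "concordant"
LEARNING_OPPORTUNITY = "learning_opportunity"
NOT_APPLICABLE = "not_applicable"

def adoption_rate(designations):
    """Fraction of matched cases that received any designation at all."""
    if not designations:
        return 0.0
    marked = [d for d in designations if d is not None]
    return len(marked) / len(designations)

def designation_breakdown(designations):
    """Counts per designation among the marked cases."""
    return Counter(d for d in designations if d is not None)
```

Tracking the rate over time is what reveals whether radiologists are actually using the system, as the post-go-live assessment below illustrates.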

Our solution leveraged our informatics partners, with whom we built an automatic feedback loop between radiology reports and pathology reports. In order to accomplish this feedback loop, we matched the potential pathology specimens with a correlative radiology body part. So, for example, a surgical lung specimen would correlate to a CT or MRI of the chest, but not a cardiac CT. We chose to include only cross-sectional studies to limit the number of correlations. Feedback was accomplished in two ways: via an email alert for all cases with an imaging report and a corresponding pathology result; and via a module that shows these matching results in a tabular format. In both of these formats, a single click by the radiologist results in a concordance, learning opportunity or not applicable designation. This system met all requirements and was released to all radiologists at NYU.
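The specimen-to-study matching can be pictured as a lookup table from specimen site to the set of correlative cross-sectional studies. This is a hypothetical sketch, not the actual NYU matching logic; the site names and study descriptions are invented for illustration, and a production system would match coded vocabularies rather than strings:

```python
# Hypothetical mapping from pathology specimen site to correlative
# cross-sectional imaging studies (CT/MRI only, per the design choice
# described above).
CORRELATIVE_STUDIES = {
    "lung": {"CT CHEST", "MRI CHEST"},
    "liver": {"CT ABDOMEN", "MRI ABDOMEN"},
    "kidney": {"CT ABDOMEN", "MRI ABDOMEN"},
}

def is_correlative(specimen_site, study_description):
    """True when the imaging study plausibly correlates with the specimen
    site -- e.g. a surgical lung specimen matches a chest CT, while a
    cardiac CT is absent from the lung entry and is rejected."""
    allowed = CORRELATIVE_STUDIES.get(specimen_site.lower(), set())
    return study_description.upper() in allowed
```

Restricting each site to a short list of study types keeps the number of candidate matches, and hence the number of alerts, manageable.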

During our post-go-live assessment, we determined that the number of cases marked as concordant or a learning opportunity by radiologists was low. Despite the fact that the system met all requirements, adoption was limited. After discussing the system with users, we found that one of the key limitations for adoption was the volume of emails and matches that were generated. Many pathology results have several addenda; in some cases, as many as five emails were generated on a single pathology report, resulting in email fatigue. With this feedback in hand, we changed the system to send only one email per potential match. This change resulted in a significant decrease in the number of email alerts and a significant increase in the percentage of cases marked by radiologists.
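The fix described above amounts to de-duplicating alerts per match, so that addenda to a pathology report cannot re-trigger emails. A minimal sketch of that idea (identifiers are hypothetical; a production system would persist the sent-alert record rather than keep it in memory):

```python
# Hypothetical de-duplication of email alerts: each pathology report /
# imaging study pair triggers at most one email, so addenda to the same
# pathology report are silently suppressed.
_sent_alerts = set()

def should_alert(pathology_report_id, imaging_study_id):
    """Return True only the first time a given match is seen."""
    key = (pathology_report_id, imaging_study_id)
    if key in _sent_alerts:
        return False  # an addendum or repeat match: suppress the email
    _sent_alerts.add(key)
    return True
```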

This system shows a classic example of a product life cycle. There is a clearly defined problem, with clearly demonstrable expectations of a successful solution. The system was designed to meet the needs of the end user. After the system was deployed, repeated measurement showed a lack of success, signaling the need to iterate. Direct feedback from the end user directed the redesign and dramatically improved compliance. Continued assessment of metrics and redesign based on user feedback is essential in any new system.

Another issue is the lack of visibility of the radiologist. This is considered one of the top 10 threats to radiology as a specialty. Before the advent of PACS, physicians would come to the radiology department because we controlled all the images from studies. Now, with the distributive model of imaging, where every computer can display images, physicians are less likely to interact directly with the radiologist. However, during this transition to PACS, medical imaging went through a revolution. Clinicians became heavily reliant on imaging for many aspects of patient care and, as a result, the volume of imaging studies exploded.

Despite the critical role that radiology was and is playing in patient care, most patients don’t even know that radiologists exist. Less than half of patients know that radiologists are physicians, and most think that the radiologist is the person who performs the exam, not interprets the exam. I remember distinctly at a family party someone asking me if being a radiologist was a real job. I did my best to laugh it off.

Technology offers many solutions to this problem. As in most business arenas, travel and physical presence at meetings have become less important. The same is true of radiology. Why not leverage Webex capabilities and screen sharing to close the gap with the physicians caring directly for these patients?

Using our current systems, we were able to create a virtual consultation service where, if physicians had questions, they could click a button within our EMR to reach the appropriate radiology section. The message arrives as an instant message to the radiologist in patient context. This way, with one click we are able to access the patient’s imaging studies and interact with the clinician either via IM or over the phone. As our department continued to grow and become more geographically widespread, we used this tool to perform virtual ICU rounds, allowing us to maximize our time and our clinicians’ time while not losing the interaction with our colleagues and continuing to provide outstanding patient care.

We have highlighted two areas of innovation which have been implemented at NYU. However, a critical eye is needed for all of our current processes. We need to leverage our data to improve efficiency, improve turnaround time, and most importantly, improve patient care. As new systems develop, we need to keep our primary goals of outstanding patient care, research and education front and center in our minds to ensure that each new system and each new program results in improvement in one of these areas.

About the author: Dr. William Moore is the chief of thoracic imaging and the clinical director of radiology information technology at New York University. Dr. Moore trained at Stony Brook University where he did his residency in radiology as well as his fellowship in medical informatics. Dr. Moore did his subspecialty training in thoracic imaging at NYU. He was the residency program director and the vice chair of education at Stony Brook before he returned to NYU in 2015.