by John W. Mitchell, Senior Correspondent | February 05, 2019
From the January/February 2019 issue of HealthCare Business News magazine
In popular culture, Pokémon Go is probably the best-known example of augmented reality (AR), a technology that overlays virtual components on actual physical environments.
But using your iPhone to capture cartoon characters around the neighborhood is just one example of what AR can do, and medical imaging is full of promising applications.
HealthCare Business News spoke to Ian Watts, a computing science graduate student at the University of Alberta, to find out about an AR application he developed called ProjectDR, which displays CT and MR scans directly on a patient's body in a way that moves with the patient.
HCB News: As you developed this technology, what was the need that you were trying to meet?
AR is a relatively new field with a lot of excitement around it, so we are exploring whether AR systems are viable for medical applications. We can produce more intuitive ways to interact with medical data in real time, improve perception, and potentially improve patient outcomes.
The specific need we are trying to meet with ProjectDR is to reduce the difficulty of locating and working with anatomy under the skin by giving the clinician more information and the patient an improved experience. Information can be gained from viewing medical images on a monitor, but it can still be challenging to use that information while working with a patient.
Our goal was to develop a tool for clinicians performing manipulations on a patient's spine. The clinician must find the correct vertebrae to work with by palpating or using visual indicators on the patient's skin, a challenging task that leaves the clinician prone to error. With ProjectDR, the medical images can be overlaid and mapped to the patient to provide clear visuals of their spine and greater spatial awareness of the rest of their anatomy.
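The interview does not describe ProjectDR's internals, but the mapping step it refers to can be sketched in miniature: given the tracked rigid transform from patient (marker) coordinates into the projector's frame, each 3D anatomy point is projected to a projector pixel with a pinhole model. Everything below — function names, the calibration numbers, the pinhole assumption — is illustrative, not the system's actual code.

```python
def project_point(point_patient, rotation, translation, focal, center):
    """Map a 3D point in patient (tracker) coordinates to a projector pixel.

    rotation (3x3 row-major list) and translation ([tx, ty, tz]) give the
    patient-to-projector rigid transform, e.g. from motion-capture markers;
    focal and center are the projector's pinhole intrinsics.
    """
    # Rigid transform into the projector's coordinate frame
    x, y, z = (
        sum(rotation[r][c] * point_patient[c] for c in range(3)) + translation[r]
        for r in range(3)
    )
    # Perspective divide: farther points land closer to the image center
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return (u, v)
```

In a live system this projection would run every frame against the latest tracked pose, which is what keeps the displayed anatomy "moving with the patient."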
HCB News: How did the actual development process go? Were there any particularly challenging hurdles?
The development process for ProjectDR was iterative and progressed through various stages of testing combinations of hardware. The first concept was created as a class project in the computing science department using a handheld laser projector, motion capture cameras and markers for tracking. It could only display basic 3D models. However, it worked well enough to give us a solid prototype to improve upon and modify.
The next version added substantially improved graphical features, such as volume rendering for displaying CT and MR images. The handheld projector was replaced with a larger and brighter LED projector, and we built a sturdy frame to suspend and move all of the parts above a table workspace. The current iteration features a user-friendly interface and many quality-of-life improvements. It can also interface with more types of hardware for motion and eye tracking, and with depth sensors like the Microsoft Kinect and the Magic Leap.
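Volume rendering, mentioned above, turns a stack of CT or MR slices into a single image. As a toy illustration of the idea (not ProjectDR's actual renderer), the simplest variant is a maximum-intensity projection, which keeps only the brightest voxel along each viewing ray:

```python
def mip(volume):
    """Maximum-intensity projection along the first (depth) axis.

    volume: nested lists of scalar intensities, shaped depth x rows x cols,
    e.g. a stack of CT slices. Returns a rows x cols image in which each
    pixel holds the brightest value encountered along the viewing ray.
    """
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [max(volume[d][r][c] for d in range(depth)) for c in range(cols)]
        for r in range(rows)
    ]
```

Real clinical renderers ray-cast through the volume with opacity transfer functions rather than a plain maximum, but the collapse from 3D data to a 2D image is the same idea.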