How about spending minutes, not hours, turning CT and MR scan data into 3D models of a patient's anatomy?
When Steven Keating, Ph.D., then a 26-year-old graduate student in the MIT Media Lab's Mediated Matter group, learned he had a brain tumor (since safely removed), he grew curious to see his own brain before surgery, to better understand his condition and the treatment options he faced.
He collected all his scans and tried to prepare them for printing, but grew frustrated with the tools at his disposal, which were cumbersome and inaccurate.
So he reached out to his lab colleagues, who were researching new ways to print 3D models of biological samples.
"It never occurred to us to use this approach for human anatomy until Steve came to us and said, 'Guys, here's my data, what can we do?'" says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute.
A loose collaboration followed, including scientists at the Wyss Institute as well as researchers and physicians at centers in the U.S. and Germany. Together they developed a novel technique to quickly and easily convert medical images into models with heretofore unattained detail, which they reported in the journal 3D Printing and Additive Manufacturing.
"I nearly jumped out of my chair when I saw what this technology is able to do," recalled co-author Dr. Beth Ripley, assistant professor of radiology at the University of Washington and clinical radiologist at the Seattle VA. "It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis."
The problem is that volumetric data from imaging like MR and CT contains so much detail that the points of interest can get lost. The features you want to see must be highlighted to distinguish them from surrounding tissue – a very time-intensive process called "segmentation," in which a radiologist must trace the objects of interest on every single slice, by hand.
The alternative is automatic "thresholding", in which a computer converts grayscale pixels into either solid black or solid white pixels, depending on a specified “threshold” between black and white.
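The idea behind thresholding can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: every gray value at or above a cutoff becomes pure white, everything below becomes pure black, so all intermediate shading is lost.

```python
def threshold(pixels, t=128):
    """Binarize grayscale values (0-255): >= t becomes white (255), else black (0)."""
    return [255 if p >= t else 0 for p in pixels]

# Eight evenly spaced gray levels collapse to just two values,
# discarding every intermediate shade:
grays = [0, 32, 64, 96, 128, 160, 192, 224]
print(threshold(grays))  # [0, 0, 0, 0, 255, 255, 255, 255]
```

The choice of `t` is exactly the "specified threshold" the article mentions: move it, and features near the cutoff appear or vanish wholesale.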
Unfortunately, since medical data contains many ill-defined borders between objects, both automatic thresholding and hand segmentation tend to exaggerate or understate features, losing vital detail.
The new approach is both fast and accurate – thanks to the use of "dithered bitmaps," a file format in which grayscale pixels are converted into black and white pixels of varying densities to create apparent shades of gray – quite similar to the half-tones used to print pictures in newspapers and magazines.
Converting differently-shaded pixels into a mix of black or white pixels with different densities lets 3D printers use two different materials to make the model and preserve subtle detail.
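A classic way to produce such a dithered bitmap is Floyd–Steinberg error diffusion; the sketch below is a generic illustration of that technique, not the specific pipeline the researchers used. Each pixel is snapped to black or white, and the rounding error is pushed onto unvisited neighbors, so the local density of black and white pixels tracks the original gray level instead of being thrown away.

```python
def dither(image):
    """Floyd-Steinberg error diffusion on a 2D list of gray values (0-255).
    Each pixel is quantized to 0 or 255; the quantization error is spread
    to the right and lower neighbors in 7/16, 3/16, 5/16, 1/16 proportions."""
    h, w = len(image), len(image[0])
    img = [list(row) for row in image]  # work on a copy, don't mutate input
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = old - new
            if x + 1 < w:                       # right neighbor
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:                       # below-left
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16   # directly below
                if x + 1 < w:                   # below-right
                    img[y + 1][x + 1] += err * 1 / 16
    return img

# A uniform mid-gray patch comes out roughly half black, half white:
patch = [[128] * 8 for _ in range(8)]
out = dither(patch)
white = sum(p == 255 for row in out for p in row)
```

Contrast this with plain thresholding, which would turn the same mid-gray patch into a solid block: here the mix of black and white pixels preserves the shade, which is what lets a two-material 3D printer reproduce subtle gradations.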
"Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money," said James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute and corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour."
The new approach could help push 3D printing into routine exams and diagnoses, as well as patient education.
"I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor's office for a routine or non-routine CT or MR scan will be able to get a 3D-printed model of their patient-specific data within a few days," said Weaver.
The researchers also noted that the bitmap input files described here were all prepared with open-source software using existing image-processing algorithms, emphasizing that this will allow "for the unconstrained widespread adoption of this approach."
In May, NHS surgeons used 3D printing to perform a lifesaving kidney transplant on two-year-old Dexter Clark.
They used Stratasys’ multi-material 3D printing technology to plan their procedure prior to implanting a kidney from Dexter’s father into the boy’s abdomen. The hospital is the first to use this technology to map out a successful transplant of an adult kidney into a small child with anatomical complexities.
“The ability to print a 3D model of the patient’s anatomy in varying textures, with the intricacies of the blood vessels clearly visible within it, enables us to differentiate critical anatomical relations between structures,” Pankaj Chandak, the transplant registrar at Guy’s and St Thomas’ NHS Foundation Trust, told HCB News. “The flexible materials also allowed us to better mimic the flexibility of organs within the abdomen for simulation of the surgical environment.”