Top AI stories of the year

January 11, 2023

As the healthcare industry continues to grow more familiar with what artificial intelligence is, as well as what it isn’t, new research is providing an increasingly clear-eyed look at how deep learning could fundamentally change medicine. Here, presented in chronological order, are the ten most-read AI stories of the year from our Daily News online.

IBM sells Watson Health to investment firm Francisco Partners

In January, global investment firm Francisco Partners bought the data and analytics business of IBM’s Watson Health.

The firm acquired various parts of Watson Health, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex and Imaging Software Offerings. The deal gives it access to a broad array of health data, which it plans to use to create a new stand-alone company. It will also retain IBM’s management team for the data and analytics business, according to TechCrunch.

While neither company disclosed terms, previous reports said IBM was aiming to bring in $1 billion from the sale. "Today's agreement with Francisco Partners is a clear next step, as IBM becomes even more focused on our platform-based hybrid cloud and AI strategy. IBM remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT,” Tom Rosamilia, senior vice president of IBM Software, said in a statement.

The current management team will continue in similar roles at the new company and will serve existing clients in the life sciences, provider, imaging, payer and employer, and government health and human services sectors.

Francisco Partners will also add additional support to the new company. “Partnering with corporations to execute divisional carve-outs has been a core focus of Francisco Partners,” said Justin Chen, principal at Francisco Partners.


Oxipit scores CE Mark for first autonomous AI application, ChestLink

Oxipit nabbed European approval for ChestLink in April, the first AI application designed to conduct diagnostic evaluations of chest X-rays on its own, without input from radiologists.

The company scored the CE Mark for its solution, which it says will help alleviate radiologist shortages by automating between 15% and 40% of daily reporting workflows, depending on the type of medical institution. It can now be deployed in 32 European countries.

When it is highly confident that an X-ray shows no abnormalities, ChestLink produces the final report for the healthy patient. Oxipit says these X-rays would also appear normal to trained professionals, and that by automating this routine work, ChestLink gives radiologists more time to focus on patients and complete other important tasks. The application takes into account patient age, clinical context and varying radiologist subjectivity.
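Oxipit has not published the decision logic behind this step, but the workflow it describes amounts to a confidence-gated triage rule: finalize the report only when every finding score falls below a conservative cutoff, and otherwise hand the study to a radiologist. The Python sketch below is only an illustration of that idea; the threshold value, function names and report text are assumptions made for the example and do not represent ChestLink's actual implementation.

```python
# Hypothetical sketch of a confidence-gated autonomous reporting step.
# The threshold, data structures and report text are illustrative only;
# this is not Oxipit's actual ChestLink logic.
from dataclasses import dataclass

AUTONOMY_THRESHOLD = 0.01  # assumed maximum allowed abnormality probability


@dataclass
class TriageDecision:
    autonomous: bool
    report: str


def triage_chest_xray(finding_probabilities: dict[str, float]) -> TriageDecision:
    """Finalize a 'no abnormalities' report only when every finding's predicted
    probability is below the cutoff; otherwise queue the study for a radiologist."""
    if all(p < AUTONOMY_THRESHOLD for p in finding_probabilities.values()):
        return TriageDecision(True, "No radiological abnormalities detected; report finalized automatically.")
    return TriageDecision(False, "Possible abnormality; study routed to radiologist for a full read.")


# Example with made-up probabilities from an upstream detection model
decision = triage_chest_xray({"nodule": 0.002, "consolidation": 0.004, "pneumothorax": 0.001})
print(decision.autonomous)  # True -> reported as healthy without radiologist input
```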

Fully autonomous ChestLink operations in a clinical setting are expected to start in early 2023, according to company CEO Gediminas Peksys. “The sensitivity metric of 99% has translated to zero clinically relevant errors at our deployment institutions during the application piloting stage,” Peksys said.

The solution is based on Oxipit ChestEye, an earlier platform developed by the company for preliminary chest X-ray reports. Oxipit ChestEye can identify 75 radiological findings, which make up approximately 90% of the abnormalities that radiologists encounter daily.


AI proves worth as second reader for mammograms

Based on its ability to detect cancer, artificial intelligence may serve as a sufficient second reader for mammograms and reduce the workload on radiologists.

That’s what researchers in Norway said in an April study comparing the technology’s performance to routine independent double reading in a population-based screening program.

The largest of its kind, the study assessed the use of AI in reading almost 123,000 exams performed on over 47,000 women at four facilities in BreastScreen Norway, the country’s population-based screening program. Such programs conduct large volumes of mammograms, producing significant workloads for radiologists that can lead to backlogs and longer waiting times for patients. And while AI has shown encouraging results in identifying cancer, its use in real screening settings has been limited.

Using a commercially available AI system, the researchers were able to identify and eliminate a high percentage of benign exams from the reading workload, as well as find the majority of screen-detected cancers. Fewer than 20% of screen-detected cancers went unidentified.

"Based on our results, we expect AI to be of great value in the interpretation of screening mammograms in the future. We expect the greatest potential to be in reducing the reading volume by selecting negative examinations,” said Solveig Hofvind, from the Section for Breast Cancer Screening, Cancer Registry of Norway in Oslo, who led the study.

The findings were published in Radiology.


AI-enabled ECG detects risk of stroke and cognitive decline

AI-enabled ECG algorithms may be able to detect more than just the risk of atrial fibrillation; they may also flag the risk of stroke and cognitive decline.

One-third of ischemic strokes are associated with atrial fibrillation, and strokes can affect a person’s cognitive functioning. In previous studies, AI-enabled electrocardiography has identified brief episodes of atrial fibrillation and predicted its risk up to 10 years before a clinical diagnosis.

In May, researchers at Mayo Clinic found that the technology may also be able to identify patients who are at greater risk of cognitive decline. The algorithm showed that a high probability of atrial fibrillation was connected with the presence of infarctions (incidents of cerebral stroke) on MR.

The study is titled Artificial Intelligence-Enabled Electrocardiogram for Atrial Fibrillation Identifies Cognitive Decline Risk and Cerebral Infarcts. “Artificial intelligence-enabled electrocardiography acquired during normal sinus rhythm was associated with worse baseline cognition and gradual decline in global cognition and attention. The findings raise the question whether initiation of anticoagulation is an effective and safe preventive strategy in individuals with a high AI-ECG algorithm score for reducing the risk of stroke and cognitive decline,” said Dr. Jonathan Graff-Radford, a Mayo Clinic neurologist and the study's corresponding author.

Assessing sinus-rhythm ECGs from 3,729 patients, the authors found that ECG atrial fibrillation scores correlated with a lower baseline and a faster decline in cognitive scores. One-third of patients also underwent MR, and cerebral infarcts detected on those scans were linked to a high atrial fibrillation probability.


GE Healthcare and Alliance Medical partner on AI for UK radiology operations

GE Healthcare and Alliance Medical, an imaging provider in Europe and the U.K., announced in May they would use advanced data analytics and AI solutions to help British radiology departments bolster their productivity.

The two are working on a digital solution that will streamline daily management and apply proactive problem solving to high patient volumes, schedule disruptions and inconsistent processes. They aim to open up patient access to diagnostics and establish standardization within practices, while reducing staff burnout.

To do this, the two will use multiple data analytics tools and remote collaboration products designed by GE Healthcare to help radiology departments optimize operations, introduce more consistency, facilitate virtual collaborations with experts and make care more cost-effective and faster. Alliance Medical will provide clinical expertise on patient care, daily management of patient pathways and problem solving.

“The future of healthcare information is around how to manage and collate data to improve the decision-making, patient pathways and ultimately, in the case of radiology, speed of diagnosis. A digital partnership like this offers a new level of visibility to radiology departments to help manage the high patient volumes,” Simon McGuire, general manager of GE Healthcare Northern Europe, told HCB News.



Researchers diagnose Alzheimer's with one MR scan and AI

A single MR scan, combined with AI, may be the only tool needed to make an accurate Alzheimer’s diagnosis.

In June, researchers at Imperial College London identified the disease in 98% of cases, using a standard 1.5T MR scanner and machine learning technology that detected structural feature changes within the brain, including in areas not previously linked to Alzheimer’s. They also distinguished between early- and late-stage disease with fairly high accuracy, in 79% of patients.

They say the approach could enable faster and earlier diagnoses, allowing patients to get support and treatment sooner. It also could help researchers understand the brain changes that trigger the disease and identify participants for clinical trials of drugs and lifestyle changes, which is currently difficult to do.

Professor Eric Aboagye, from Imperial’s department of surgery and cancer, who led the research, told HCB News that current diagnoses are made with multiple memory and cognitive tests and brain scans that take weeks to arrange and then process. “This will provide a simple and accessible, yet accurate, method that will become the main tool for identifying Alzheimer’s.”

Aboagye and his colleagues tested their technique on scans from more than 400 early- and later-stage patients, as well as healthy controls and those with other neurological conditions. They also applied it to data from over 80 patients undergoing diagnostic tests for Alzheimer’s at Imperial College Healthcare NHS Trust.


Aidoc poised to expand AI beyond radiology via $110 million funding round

In June, AI software developer Aidoc announced it raked in $110 million in a Series D investment round that will go toward expanding its AI Care Platform throughout the entire hospital enterprise.

Founded in 2016, Aidoc designs AI-based software to detect and alert radiologists to critical issues found on CT and X-ray scans, to help triage and prioritize cases. Its AI Care Platform has 15 FDA-cleared clinical applications, including for stroke, pulmonary embolism and brain hemorrhages.

This latest round brings the company’s total funding to $250 million. The new capital will go toward making the platform a tool that helps various healthcare professionals as they grapple with challenges created by staff and supply shortages and rising prices.

The round was co-led by global investors TCV and Alpha Intelligence Capital, with participation from AIC’s co-investor CDBI Capital. “Our aim is to massively ramp up our AI Care Platform to cover both the various hospital medical service lines and the depth of integration into the clinical workflows, empowering hospitals to activate cross-specialty care teams and deliver the best quality of care in a scalable, efficient way to patients,” said Elad Walach, CEO of Aidoc, in a statement.

The company previously raised $27 million in 2019, followed by $47 million in 2020 and $66 million in 2021.


UPMC partners with Microsoft, using clinical analytics to improve care

The University of Pittsburgh Medical Center said in July that it integrated Microsoft’s cloud computing, AI and machine-learning tools into its clinical analytics operations to adjust care protocols and foster better health outcomes.

Under a five-year agreement, UPMC will use these solutions to mine more than 13 petabytes of structured clinical data and 18 petabytes of imaging data to develop care insights.

Using clinical data to adjust COVID-19 treatments, the hospital reduced in-hospital mortality during the pandemic and is now applying the same concept to other areas, including diabetes mellitus and post-surgical adverse outcomes.

“We’re on a quest to become a true data-driven organization, a ‘learning health system’. We can do this only if analytics are embedded in everything that we do — from the executive suite to our clinicians at the bedside,” said Dr. Oscar Marroquin, chief healthcare data and analytics officer, in a statement.

During the pandemic, UPMC used clinical and financial data to reduce in-hospital mortality month-to-month. It has replicated these efforts with diabetes mellitus, which is associated with a higher risk for other conditions and adverse outcomes, especially in those with poor control of their disease.

Using historical data from more than 170,000 diabetic patients, the analytics team built a machine-learning model to predict which patients are at highest risk of poor outcomes before they reach that point, enabling endocrinologists to connect them with diabetes educators.
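UPMC has not described its model in detail, so the following Python sketch is only a generic illustration of risk stratification on historical records, using synthetic stand-in features rather than UPMC’s actual data or pipeline. It shows the basic pattern the article describes: train a classifier on past outcomes, score patients, and flag the highest-risk group for outreach such as referral to diabetes educators.

```python
# Generic risk-stratification sketch; synthetic data, not UPMC's model or features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features (imagine HbA1c, age, prior admissions) and a binary outcome label.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("Validation AUC:", round(roc_auc_score(y_test, risk_scores), 3))

# Flag the highest-risk decile for proactive outreach.
cutoff = np.quantile(risk_scores, 0.9)
print("Patients flagged for outreach:", int((risk_scores >= cutoff).sum()))
```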


Addressing bias in radiology machine learning systems

Suboptimal practices in the development of machine learning systems put them at risk of producing biased insights when applied in radiology.

In September, researchers at Mayo Clinic announced they had come up with several strategies for addressing development problems and reducing the risk of biased results, with the first of three reports focusing on the data handling process and the 12 suboptimal practices associated with it.

"If these systematic biases are unrecognized or not accurately quantified, suboptimal results will ensue, limiting the application of AI to real-world scenarios,” said Dr. Bradley Erickson, professor of radiology and director of the AI Lab at the Mayo Clinic, in Rochester, Minnesota, in a statement.

The data handling process consists of data collection, data investigation, data splitting and data engineering.

The researchers recommend in-depth reviews of the clinical and technical literature, and working with data science experts to plan data collection. They also say collections should come from multiple institutions in different countries and regions, use data from different vendors and different time periods, or include public data sets, in order to ensure diversity.
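One concrete way to act on the splitting and multi-institution advice is to keep all studies from a given site in the same split, so that test performance is not inflated by site-specific signals leaking across the train/test boundary. The sketch below uses scikit-learn's GroupShuffleSplit on an invented metadata table; it illustrates the general practice and is not the procedure described by the Mayo Clinic authors.

```python
# Institution-aware data splitting; the table and column names are invented
# for illustration and are not from the Mayo Clinic reports.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Toy metadata: one row per imaging study.
studies = pd.DataFrame({
    "study_id": range(8),
    "institution": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "label": [0, 1, 0, 1, 0, 1, 0, 1],
})

# Keep every study from an institution in the same split, so the test set
# measures generalization to sites the model has never seen.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=42)
train_idx, test_idx = next(splitter.split(studies, groups=studies["institution"]))

print("Train institutions:", sorted(studies.loc[train_idx, "institution"].unique()))
print("Test institutions: ", sorted(studies.loc[test_idx, "institution"].unique()))
```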

"Creating a robust machine learning system requires researchers to do detective work and look for ways in which the data may be fooling you,” said Erickson. "Before you put data into the training module, you must analyze it to ensure it's reflective of your target population. AI won't do it for you."

The second and third reports discuss biases that occur when developing and evaluating the model, and when reporting findings.

The findings were published in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America.


iCAD announces AI partnership with Google Health

iCAD announced that it is entering into a strategic development and commercialization agreement with Google Health to integrate Google’s artificial intelligence (AI) technology for breast cancer detection and personalized risk assessment into iCAD's portfolio of breast imaging AI solutions.

iCAD will use Google’s technology to improve its 3D and 2D AI algorithms and will commercialize developed products. iCAD will also leverage Google Cloud's secure, scalable infrastructure, which the company says will accelerate the time to market for iCAD's cloud-hosted offerings.

This is Google’s first commercial and clinical partnership since it published research in 2020 finding that its breast cancer detection algorithm outperformed radiologists.

“After several years of research and testing, Google was looking for a partner to commercialize its mammography AI technology,” Stacey Stevens, president and CEO of iCAD Inc., told HCB News. “Google selected iCAD for this partnership because of iCAD's legacy of leadership in the breast AI space and its well-established install base.”

iCAD has more than 7,500 licensed installations of its breast health solutions, including products such as ProFound AI Risk for Digital Breast Tomosynthesis (DBT), a clinical decision support tool that provides a short-term breast cancer risk estimation that is personalized for each patient based on their mammogram.

“iCAD's commitment to innovation is driven by elevating the quality of breast care delivered to every woman, everywhere, and this partnership will allow us to expand and extend the impact of our technologies on a global scale,” Stevens said.

Both iCAD's and Google's technologies have been shown to improve efficiency and accuracy for radiologists reading mammography; however, they have been trained on different data sets, Stevens said.

Using Google Cloud will also help the company bring breast screening tools to underserved regions that are constrained by infrastructure challenges.

“By combining our market experience and deep tradition of pioneering innovation in the fight against breast cancer with Google's technological expertise and patient insights, we can work together to advance healthcare solutions that make a difference for individuals, caregivers and health professionals,” Stevens said.

Greg Corrado, head of Health AI for Google, explained that collaborating with companies like iCAD is a key part of its strategy.

"Google Health's AI tech could be used to make healthcare more available, more accessible, more accurate,” Corrado said in a statement announcing the collaboration. “But effecting change like this will only be possible if we work closely with forward-looking partners; those with a deep tradition of pioneering innovation and the market experience and wherewithal to put innovations into real workflows. The entire ecosystem needs to work together to advance healthcare solutions that truly better serve patients, doctors, and health systems. Google Health working with iCAD is a great example of two organizations coming together to leverage our mutual strengths, technological capabilities, and resources to improve breast cancer screening worldwide, with the ultimate goal of improving health outcomes of individuals and communities."