
COVID-19 pandemic leaves the healthcare industry a target for fake news

April 02, 2021
By Dan Brahmy

The COVID-19 pandemic has proven that online social conversations travel quickly, and over the past year the healthcare industry has become a prime target for disinformation campaigns. When it comes to facts about COVID-19 testing, treatments and vaccines, disinformation has permeated discussions on Facebook, Twitter, Instagram and the like. Unfortunately, some of these conversations include negative sentiment, fake account creation and misleading online chatter about specific companies’ vaccination efforts, treatment offerings and pandemic communication strategies.

As the country struggles to separate real news from fake news around the pandemic, fake profiles and “bad actors” (real accounts with nefarious agendas) are making matters worse. With so much still unknown about the virus, the vaccine rollout and the return to “normal” life, average online users can find it hard to determine the truth amid the conflicting stories being shared across the internet.

Where the beginning of lockdown saw a rise in disinformation about the virus itself, recent disinformation campaigns have put COVID-19 vaccines front and center. For example, Cyabra scanned more than 390,000 profiles across Twitter and Facebook that engaged in conversations around the virus and found that 11% of the vaccine-related posts it analyzed on Twitter came from fake accounts. Facebook likewise showed a high number of profiles expressing negative sentiment when referring to the vaccines. Further, a study from the Center for Countering Digital Hate found that Instagram’s own algorithm was prone to spreading misinformation about the vaccine.

As fake news becomes more difficult to detect, healthcare professionals are fighting an uphill battle to convey accurate information to their patients, staff members, shareholders and local communities. If hospitals and healthcare facilities want to take on “bad actors” intentionally spreading fake news, they should keep track of unusual online behavior and monitor various platforms to understand typical patient behaviors and values.

How healthcare facilities can fight disinformation
With more people at home than ever before, social media activity is at an all-time high, making it important to understand the dangers of disinformation. Uncovering and analyzing disinformation campaigns and their impact on how everyday consumers perceive public health efforts can help healthcare professionals decide how best to engage with patients. Healthcare providers looking to examine these types of campaigns can use AI software to dive deeper into online conversations, uncover sources and reach, and curb the negative effects of disinformation.
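To make the idea of measuring a narrative’s sources and reach concrete, here is a minimal sketch, assuming you have already collected a batch of posts with author, follower and share counts. The field names, figures and keyword are hypothetical; this is not Cyabra’s software or any platform’s real API, just a simple way to estimate the audience exposed to a narrative and rank who is amplifying it.

```python
# Illustrative sketch only: estimate a narrative's reach and rank its top sources
# from a batch of collected posts. Fields and figures are hypothetical.
from collections import defaultdict

posts = [
    {"author": "user_a", "followers": 120_000, "shares": 340, "text": "the vaccine changes your DNA"},
    {"author": "user_b", "followers": 800,     "shares": 12,  "text": "the vaccine changes your DNA"},
    {"author": "user_c", "followers": 45_000,  "shares": 95,  "text": "hospitals are inflating case counts"},
]

def narrative_reach(posts, keyword):
    """Sum follower counts and tally shares for posts mentioning a keyword."""
    matching = [p for p in posts if keyword.lower() in p["text"].lower()]
    reach = sum(p["followers"] for p in matching)        # rough audience exposed
    shares_by_author = defaultdict(int)
    for p in matching:
        shares_by_author[p["author"]] += p["shares"]     # who amplifies it most
    top_sources = sorted(shares_by_author.items(), key=lambda kv: kv[1], reverse=True)
    return reach, top_sources

reach, top_sources = narrative_reach(posts, "vaccine changes")
print(f"Estimated reach: {reach:,} accounts")
print("Top sources:", top_sources)
```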

Analyzing conversations on vaccine rollout, efficacy, local and national COVID-19 case rates, and other high-profile topics surrounding the pandemic can help healthcare administrators discover trends and bring more in-depth insights to light. For example, which patient groups are being targeted by these online campaigns? What are their main concerns about the vaccine, and how are those concerns likely to manifest at healthcare facilities? From these conversations, tools such as identity vectors can help categorize the accounts spreading the information as Real, Fake or Bad. With the healthcare industry vulnerable to many types of disinformation campaigns, it’s vital to understand these efforts, as they could threaten the credibility of a facility, its staff and patients, as well as public health on a broader scale.
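To illustrate what categorizing accounts as Real, Fake or Bad can look like, here is a minimal sketch built on a handful of hypothetical account features and thresholds. Cyabra’s actual identity vectors are proprietary and far richer, so treat this only as a toy illustration of the approach.

```python
# Illustrative sketch only: label an account "Real", "Fake" or "Bad" from a simple
# feature vector. Features and thresholds are hypothetical stand-ins, not
# Cyabra's identity-vector model.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int            # how long the account has existed
    followers: int
    following: int
    posts_per_day: float
    has_profile_photo: bool
    flagged_share: float     # fraction of recent posts pushing a flagged narrative

def categorize(acct: Account) -> str:
    """Return a coarse label from heuristic inauthenticity signals."""
    fake_signals = 0
    if acct.age_days < 30:
        fake_signals += 1                                  # very new account
    if acct.following > 0 and acct.followers / acct.following < 0.05:
        fake_signals += 1                                  # follows far more than it is followed
    if acct.posts_per_day > 50:
        fake_signals += 1                                  # implausibly high posting rate
    if not acct.has_profile_photo:
        fake_signals += 1

    if fake_signals >= 3:
        return "Fake"        # likely automated or fabricated profile
    if acct.flagged_share > 0.8:
        return "Bad"         # authentic-looking account devoted to spreading a false narrative
    return "Real"

print(categorize(Account("promo_4821", 5, 12, 900, 120.0, False, 0.9)))       # -> Fake
print(categorize(Account("concerned_dad", 1500, 400, 350, 8.0, True, 0.95)))  # -> Bad
print(categorize(Account("dr_jane", 2400, 1800, 300, 1.2, True, 0.02)))       # -> Real
```

In practice such labels come from models trained on many more behavioral and content signals, but simple cues like account age, posting rate and follower ratios are common first-pass indicators of inauthentic activity.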

Fake news travels fast, making it important for those in the healthcare industry to monitor false information as it spreads online. With so much content available at our fingertips, it can be difficult to differentiate real conversations from fake ones, but with monitoring and detection it is possible to stop the spread in its tracks. Familiarizing yourself with audience behavior can help identify red flags, catching false narratives that could threaten your facility, staff and community sooner rather than later.


About the author: Dan Brahmy is the Co-founder and CEO of Cyabra, a SaaS platform that uses AI to measure impact and authenticity within online conversations. Prior to Cyabra, Dan served as a Senior Strategy Consultant at Deloitte Digital and a summer Business Associate at Google EMEA. Since founding Cyabra, Dan and his team have helped brands analyze conversations and unravel hidden insights to identify and categorize disinformation, deepfakes, and the types of accounts they’re coming from (real, bad, or fake). Dan received his B.A. in Business Administration and Marketing from IDC Herzliya.