20  AI, Misinformation, and the Infodemic

Note: Learning Objectives

By the end of this chapter, you will be able to:

  1. Understand how generative AI has transformed the health misinformation landscape
  2. Identify characteristics of AI-generated health misinformation
  3. Apply detection and verification techniques to assess content credibility
  4. Implement evidence-based counter-misinformation strategies
  5. Develop organizational protocols for rapid response to health misinformation
  6. Navigate ethical and policy considerations in combating misinformation while respecting free expression
Tip: Time Estimate

Reading and exercises: 60-75 minutes
Hands-on project: 90-120 minutes
Total: 2.5-3 hours

Important: Prerequisites

This chapter builds on:

  • Chapter 10: Ethics and Responsible AI (?sec-ethics)
  • Chapter 17: Policy and Governance (?sec-policy)
  • Chapter 19: Large Language Models in Public Health: Theory and Practice (?sec-llm-theory-practice)

You should be familiar with how LLMs work, ethical considerations in AI deployment, and policy frameworks.

20.1 What You’ll Learn

This chapter examines the intersection of artificial intelligence and health misinformation—one of the most critical challenges in modern public health. We’ll explore how AI systems both generate and amplify false health information, technologies and strategies for detection and verification, evidence-based counter-measures, and policy considerations for practitioners navigating this complex landscape.


20.2 Introduction: The Information Crisis in Public Health

On January 30, 2020, the World Health Organization declared not only a Public Health Emergency of International Concern for COVID-19, but also warned of an accompanying “infodemic”—an overabundance of information, both accurate and false, that makes it difficult for people to find trustworthy sources and reliable guidance (WHO, 2020). The COVID-19 pandemic demonstrated that controlling disease spread requires not only epidemiological interventions but also effective management of the information ecosystem.

The emergence of powerful generative AI systems in 2023-2024—particularly large language models like GPT-4, Claude, and Gemini, alongside image generation tools like DALL-E 3 and Midjourney—has fundamentally altered the misinformation landscape. These technologies enable the creation of convincing, personalized false health content at unprecedented scale and sophistication (Starbird et al., 2023). A single actor can now generate thousands of fake articles, manipulated images, and synthetic videos in hours, each tailored to specific audiences and platforms.

20.2.1 The Stakes for Public Health

The stakes are profound:

  • Misinformation undermines vaccination programs and promotes ineffective or dangerous treatments
  • It erodes trust in health institutions, particularly among vulnerable populations
  • It exacerbates health inequities by disproportionately affecting populations with lower health literacy (Loomba et al., 2021; Pierri et al., 2022)
Warning: Surgeon General’s Advisory

The U.S. Surgeon General issued an advisory in 2021 declaring health misinformation “an urgent threat to public health” (U.S. Surgeon General, 2021). Subsequent years have only intensified this challenge with the advent of generative AI.


20.3 The Landscape of AI-Generated Health Misinformation

20.3.1 Types of AI-Generated Health Misinformation

20.3.1.1 Text-Based Misinformation

Large language models can generate medically plausible but false content that is often indistinguishable from human writing.

Examples include:

  • Fake scientific papers: LLMs generate articles with fabricated studies, citations, and statistical analyses that appear legitimate (Else, 2023, Nature)
  • Misleading health advice: Convincing but wrong information about treatments, diagnoses, and preventive measures
  • Fabricated expert testimony: Synthetic quotes attributed to non-existent doctors or researchers
  • Emotionally manipulative narratives: Personal “testimonials” designed to exploit fears and anxieties
Important: Detection Challenge

A 2023 study found that medical professionals could not reliably distinguish AI-generated medical text from human-written content, with accuracy rates around chance levels (Gao et al., 2023, npj Digital Medicine).

20.3.1.2 Visual Misinformation

Image generation and manipulation tools enable creation of:

  • Fake medical imaging: Synthetic X-rays, MRIs, or pathology slides showing non-existent conditions
  • Manipulated data visualizations: Charts and graphs with fabricated statistics that appear authoritative
  • Before/after deceptions: Synthetic images showing dramatic but false treatment results
  • Fake product images: Counterfeit medications or devices that don’t exist

20.3.1.3 Deepfakes and Synthetic Media

Video and audio synthesis technologies create:

  • Deepfake videos: Fabricated footage of health officials, doctors, or scientists making false statements (Westerlund, 2019)
  • Voice cloning: Spoofed audio of trusted figures providing dangerous medical advice
  • Synthetic influencers: Entirely artificial personas building audiences to spread misinformation

The technology for creating convincing deepfakes has become accessible to non-experts, with numerous apps and services available for minimal cost (Kietzmann et al., 2020).

20.3.1.4 Amplification Through AI-Powered Bots

Beyond content creation, AI powers distribution mechanisms:

  • Social media bots: Automated accounts that share, comment, and amplify misinformation to create false consensus (Broniatowski et al., 2018, AJPH)
  • Coordinated inauthentic behavior: Networks of AI-controlled accounts that manipulate trending topics and algorithmic recommendations
  • Personalized targeting: AI systems that identify vulnerable users and deliver tailored misinformation based on psychological profiles

20.3.2 The Psychology of AI-Generated Misinformation

AI-generated misinformation exploits well-documented cognitive biases:

Familiarity and repetition effects: Repeated exposure to false claims increases perceived truthfulness, regardless of actual veracity (Pennycook et al., 2018). AI enables unprecedented repetition at scale.

Emotional arousal: Content triggering fear, anger, or disgust spreads more rapidly than neutral information (Vosoughi et al., 2018, Science). AI systems can optimize content for emotional impact.

Confirmation bias: People preferentially accept information confirming existing beliefs (Nickerson, 1998). AI-powered targeting delivers precisely the misinformation each user is predisposed to believe.

Source credibility heuristics: People use shortcuts like credentials, affiliations, and production quality to assess trustworthiness (Metzger & Flanagin, 2010). AI-generated content mimics these credibility markers perfectly.

20.3.3 Why AI Misinformation is Particularly Dangerous

Several factors make AI-generated misinformation uniquely threatening:

  1. Scale: Automated generation enables floods of content overwhelming fact-checkers and moderation systems
  2. Personalization: AI tailors content to individual vulnerabilities, making it more persuasive
  3. Sophistication: Advanced language models produce content indistinguishable from expert writing
  4. Cost reduction: Creating convincing misinformation no longer requires skilled propagandists
  5. Plausible deniability: Synthetic content is difficult to definitively attribute to specific actors
  6. Speed: AI generates and adapts content faster than human-led response efforts

20.4 Detection and Verification Technologies

20.4.1 AI Detection Tools

The irony of fighting AI with AI is not lost on researchers. Several approaches exist for detecting synthetic content:

20.4.1.1 Statistical Patterns and Artifacts

AI-generated text often exhibits:

  • Unusual word frequency distributions
  • Lack of factual grounding or specific details
  • Overly formal or generic language
  • Inconsistent writing style within documents

Tools like GPTZero, Originality.AI, and OpenAI’s own detector (discontinued due to low accuracy) attempt to identify these patterns, though with limited success (Kirchner, 2023, The Markup). Detection accuracy varies widely, with false positive rates of 10-30% common.
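
Because these statistical signals are weak and model-dependent, a screening pipeline should treat them as one input among several. The sketch below, assuming the `transformers` and `torch` packages and the publicly available "gpt2" checkpoint, scores text by language-model perplexity and flags unusually predictable text for human review; the threshold is an illustrative placeholder, not a validated cut-off.

```python
# Minimal sketch of perplexity-based screening for possibly AI-generated text.
# Low perplexity (highly predictable text) is a weak signal, never proof.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Language-model perplexity of `text` (lower means more predictable)."""
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return float(torch.exp(loss))

def flag_for_review(text: str, threshold: float = 40.0) -> bool:
    """Route suspiciously 'smooth' text to a human reviewer (illustrative threshold)."""
    return perplexity(text) < threshold
```

In practice, such a score would be combined with source reputation, claim verification, and the provenance signals discussed below, rather than used as a standalone verdict.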

20.4.1.2 Digital Forensics for Images and Videos

Detection methods for synthetic visual content include:

  • Inconsistencies in lighting and shadows: AI systems sometimes fail to render physically plausible illumination
  • Biometric irregularities: Unnatural blinking patterns, inconsistent facial movements in deepfakes (Li et al., 2018)
  • Compression artifacts: Different patterns between AI-generated and camera-captured images
  • Metadata analysis: Absence of expected EXIF data or presence of generation signatures

Tools like Microsoft Video Authenticator, Sensity, and Reality Defender provide deepfake detection services, though accuracy degrades as generation technology improves.
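
One of the simpler forensic signals listed above, metadata analysis, can be automated cheaply. The sketch below assumes the Pillow package and a hypothetical file name; it checks whether an image carries the EXIF camera tags a genuine photograph usually has. Missing metadata is only a weak hint, since screenshots and privacy-stripped photos also lack it, so it should feed a broader review rather than a verdict.

```python
# Minimal sketch of the "metadata analysis" signal: many AI-generated or heavily
# re-processed images carry no EXIF camera tags. Absence proves nothing by itself.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if none are present."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def missing_camera_metadata(path: str) -> bool:
    """True if the image lacks camera make/model and capture timestamp tags."""
    tags = exif_summary(path)
    return not any(key in tags for key in ("Make", "Model", "DateTime"))

# Example (hypothetical file): missing_camera_metadata("suspicious_scan.jpg")
```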

20.4.1.3 Content Provenance and Authentication

Rather than detecting fakes after creation, provenance systems aim to verify authenticity:

  • Content credentials: Coalition for Content Provenance and Authenticity (C2PA) standard embeds metadata tracking content origins (C2PA, 2023)
  • Digital watermarking: Imperceptible markers identifying AI-generated content (Fernandez et al., 2023)
  • Blockchain verification: Immutable records of content creation and modification history

These approaches show promise but face adoption challenges and can be circumvented by determined actors.
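
To illustrate the underlying idea, though not the C2PA format itself, the sketch below shows a hash-chained provenance log: each published asset is recorded with its content hash and a link to the previous record, so later tampering with either the asset or the log is detectable. Real content credentials embed cryptographically signed manifests inside the file; this standard-library sketch is only a conceptual stand-in.

```python
# Conceptual sketch of a tamper-evident provenance log (NOT the C2PA standard).
import hashlib
import json
import time

def sha256_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def append_provenance_record(log_path: str, asset_path: str, creator: str, prev_hash: str) -> str:
    """Append a record binding the asset's hash to the previous record (hash chain)."""
    record = {
        "asset": asset_path,
        "asset_sha256": sha256_file(asset_path),
        "creator": creator,
        "timestamp": time.time(),
        "prev_record_hash": prev_hash,
    }
    record_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps({"record": record, "record_hash": record_hash}) + "\n")
    return record_hash  # pass into the next call to extend the chain
```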

20.4.2 Limitations of Automated Detection

Current detection technologies face significant constraints:

Adversarial evolution: As detection improves, generation models adapt. This arms race favors generators, who only need to defeat detection once, while detectors must catch every fake (Brundage et al., 2020).

False positives: Flagging legitimate content as fake erodes trust in detection systems and can be weaponized to suppress authentic information.

Computational costs: Real-time analysis of billions of social media posts requires enormous resources beyond most organizations’ capacity.

Linguistic and cultural bias: Detection models trained primarily on English content from Western contexts may perform poorly on other languages and cultures.

20.4.3 Human-Centered Verification

Given technological limitations, human expertise remains critical:

Lateral reading: Fact-checkers investigate claims by leaving the source and examining what other reliable sources say (Wineburg & McGrew, 2016). This approach, combined with AI tools that automate information gathering, shows promise.

Structured verification protocols: Organizations like First Draft and the International Fact-Checking Network provide frameworks for systematic assessment (Wardle & Derakhshan, 2017).

Expert consultation: Medical and scientific experts can identify subtle inaccuracies that automated systems miss, particularly domain-specific fabrications.

Crowdsourced verification: Platforms like Community Notes (formerly Birdwatch) on X/Twitter leverage distributed human judgment to provide context on potentially misleading posts (Allen et al., 2022).
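
Parts of these human-centered workflows can be supported programmatically, for example by checking whether a claim has already been reviewed by fact-checkers before drafting a response. The sketch below queries Google's Fact Check Tools API (claims:search endpoint) via the `requests` package; the API key is a placeholder, and the response fields accessed are assumptions to verify against the current API documentation.

```python
# Sketch: look up existing fact-checks for a claim as one step of lateral reading.
import requests

FACTCHECK_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(claim_text: str, api_key: str, language: str = "en") -> list:
    """Return published fact-check reviews matching `claim_text`, if any."""
    resp = requests.get(
        FACTCHECK_URL,
        params={"query": claim_text, "languageCode": language, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    results = []
    for claim in resp.json().get("claims", []):
        for review in claim.get("claimReview", []):
            results.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),
                "url": review.get("url"),
            })
    return results

# Usage (hypothetical key):
# for hit in search_fact_checks("MMR vaccine causes autism", api_key="YOUR_KEY"):
#     print(hit["publisher"], hit["rating"], hit["url"])
```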


20.5 Counter-Strategies and Interventions

20.5.1 Prebunking: Psychological Inoculation

Rather than correcting misinformation after it spreads (debunking), prebunking proactively builds cognitive resistance (Roozenbeek et al., 2020).

Inoculation theory proposes that exposing people to weakened forms of misinformation, along with refutations, helps them recognize and resist manipulation techniques (McGuire, 1964). Research demonstrates effectiveness across contexts:

  • Inoculation interventions reduced susceptibility to manipulation techniques by 5-10 percentage points in large-scale studies (Roozenbeek et al., 2022, Science Advances)
  • “Bad News” game, where players create misinformation, improved identification of manipulative tactics (Basol et al., 2020)
  • Short videos explaining common manipulation techniques (emotional language, false experts, conspiracy logic) reduced sharing intentions for misinformation by 10-15% (Maertens et al., 2021)

20.5.1.1 Practical Implementation

1. Identify common manipulation techniques (emotional appeals, fake experts,
   cherry-picked data, false dilemmas)
2. Create brief warnings explaining these techniques with examples
3. Expose audiences before they encounter specific misinformation
4. Emphasize HOW to think (critical evaluation) vs WHAT to think (specific facts)
5. Make inoculation content engaging and memorable (games, videos, interactive)

20.5.2 Debunking: Effective Correction

When misinformation has already spread, evidence-based debunking is necessary (Lewandowsky et al. 2012):

20.5.2.1 The Truth Sandwich Approach

The truth sandwich (Lakoff, 2018) structures a correction in three steps:

  1. Lead with truth, not the myth
  2. Briefly mention the false claim (if necessary for context)
  3. Return to and reinforce the truth

Example:

❌ Poor: "The myth that vaccines cause autism is false. Studies show no link."

✓ Better: "Vaccines are safe and effective. Extensive research involving millions
of children found no connection between vaccines and autism. Vaccines prevent
serious diseases and save lives."

20.5.2.2 Key Principles

Effective corrections follow several evidence-based principles (Lewandowsky et al., 2020):

  • Emphasize facts over myths: Spend more time on truth than falsehoods
  • Provide alternative explanations: Fill the knowledge gap left by removing misinformation
  • Keep it simple: Complex rebuttals are less effective than clear, concise corrections
  • Use visuals: Graphics and infographics enhance retention and sharing
  • Acknowledge emotions: Address fears and concerns underlying belief in misinformation
  • Avoid backfire effects: Simply repeating misinformation can reinforce it (Swire et al. 2017)

20.5.3 Trusted Messengers and Community Engagement

Who delivers information matters as much as what information is delivered (Freimuth et al. 2014).

20.5.3.1 Messenger Hierarchy

Effectiveness varies by community, but surveys suggest the following general ordering:

  1. Healthcare providers: Doctors and nurses are consistently most trusted (Thaker and Ganchoudhuri 2021)
  2. Community leaders: Faith leaders, teachers, local officials embedded in communities
  3. Peer influencers: People similar to the target audience (age, background, values)
  4. Subject matter experts: Scientists and researchers (trust varies by community)
  5. Government officials and institutions: Often lowest trust, particularly in marginalized communities

20.5.3.2 Community-Based Strategies

  • Empower local voices: Train community health workers and trusted leaders to address misinformation
  • Meet people where they are: Engage on platforms and spaces people already use (churches, barbershops, social media groups)
  • Two-way dialogue: Listen to concerns rather than lecturing; address root causes of mistrust
  • Cultural tailoring: Messages must reflect community values, language, and experiences
Note: Case Study: CDC Partnership

The CDC’s partnership with Black churches to address COVID-19 vaccine hesitancy demonstrated this approach, increasing vaccination rates in participating communities by 10-15 percentage points (McGowan et al., 2022).

20.5.4 Platform-Based Interventions

Technology platforms have implemented various content moderation approaches:

Labeling and fact-checks: Adding warnings or fact-check notices to problematic content. Meta’s fact-checking partnership reduced sharing of labeled false content by 80% on Facebook (Mosseri, 2020), though effects decay over time.

Reducing algorithmic amplification: Limiting reach of content flagged as potentially false, rather than removing it entirely. YouTube’s modification of recommendation algorithms reduced watch time from borderline content by 70% (YouTube, 2021).

Friction mechanisms: Adding intermediate steps (confirmation dialogs, related articles) before sharing potentially false content reduces sharing by 10-20% (Pennycook et al., 2021, Nature).

Transparency requirements: Requiring disclosure of AI-generated content or synthetic media. As of 2024, several platforms mandate labeling of AI-generated images and videos, though enforcement is inconsistent.

20.5.4.1 Limitations

Platform interventions face challenges including:

  • Difficulty distinguishing legitimate speech from harmful misinformation
  • Varying standards across platforms and regions
  • Concerns about censorship and free expression
  • Sophisticated actors evading detection
  • Resource constraints for content moderation at scale

20.5.5 Rapid Response Systems

Public health organizations need infrastructure for quick misinformation response:

20.5.5.1 Monitoring and Early Detection

  • Social listening tools tracking health-related conversations
  • Automated alerts for emerging misinformation narratives
  • Partnerships with platforms for trend identification

20.5.5.2 Triage and Assessment

  • Evaluate reach, potential harm, and growth rate of misinformation
  • Prioritize response based on risk (viral vaccine misinformation > niche conspiracy theory); see the scoring sketch after this list
  • Determine appropriate response strategy (ignore, monitor, debunk, report to platform)
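
The prioritization step referenced above can be made explicit with a simple scoring rule. The sketch below combines reach, growth rate, and potential harm into a single score and maps it to a response tier; the weights, normalization constants, and cut-offs are illustrative placeholders to be calibrated against a jurisdiction's own monitoring data, not validated thresholds.

```python
# Minimal sketch of misinformation triage scoring (illustrative weights and cut-offs).
from dataclasses import dataclass

@dataclass
class MisinfoItem:
    narrative: str
    reach: int              # estimated people exposed so far
    growth_per_hour: float  # new posts or shares per hour
    harm: int               # 1 (low) to 5 (direct risk of injury or death)

def risk_score(item: MisinfoItem) -> float:
    """Weighted combination of normalized reach, growth, and harm (0-1 scale)."""
    reach_component = min(item.reach / 100_000, 1.0)
    growth_component = min(item.growth_per_hour / 1_000, 1.0)
    harm_component = item.harm / 5
    return 0.3 * reach_component + 0.3 * growth_component + 0.4 * harm_component

def response_tier(item: MisinfoItem) -> str:
    score = risk_score(item)
    if score >= 0.6:
        return "rapid response: debunk, mobilize trusted messengers, report to platform"
    if score >= 0.3:
        return "monitor closely and prepare messaging"
    return "log and ignore"

# Example: a viral, high-harm claim lands in the rapid-response tier,
# while a niche conspiracy post is logged and ignored.
viral = MisinfoItem("bleach cures measles", reach=80_000, growth_per_hour=900, harm=5)
niche = MisinfoItem("obscure conspiracy post", reach=300, growth_per_hour=2, harm=2)
```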

20.5.5.3 Coordinated Response

  • Pre-approved messaging templates for common misinformation types
  • Rapid mobilization of trusted messengers
  • Cross-sector coordination (health departments, healthcare systems, community organizations)

20.5.5.4 Evaluation

  • Track metrics: reach of corrections, changes in belief or behavior, reduction in misinformation spread
  • Iterate based on what works in specific contexts
Note: WHO EPI-WIN

The WHO’s EPI-WIN (WHO Information Network for Epidemics) exemplifies this approach, providing early warnings and coordinated responses to health misinformation globally (WHO, 2020).


20.6 Case Studies

20.6.1 COVID-19: The Infodemic Prototype

The COVID-19 pandemic generated unprecedented health misinformation volume and sophistication (Zarocostas, 2020, The Lancet).

20.6.1.1 Early Misinformation Waves (January-March 2020)

  • 5G conspiracy theories linking cell towers to virus transmission
  • Miracle cures (bleach, hydroxychloroquine, ivermectin promoted without evidence)
  • Lab origin claims and bioweapon narratives
  • Misinformation about transmission and prevention

Analysis: Misinformation spread 6 times faster than accurate information on social media during the pandemic’s first months (Kouzy et al., 2020). False claims generated more engagement (likes, shares, comments) than factual posts from health authorities.

20.6.1.2 Vaccine Misinformation (December 2020-ongoing)

  • Fertility myths: False claims vaccines cause infertility or pregnancy complications
  • Tracking chips: Conspiracy theories about government surveillance via vaccines
  • Altered DNA: Misrepresentation of mRNA vaccine mechanism
  • Minimizing disease severity to discourage vaccination

Impact: Exposure to vaccine misinformation reduced vaccination intentions by 6-12 percentage points in experimental studies (Loomba et al. 2021). Counties with higher social media use experienced slower vaccine uptake, particularly where misinformation circulated heavily (Pierri et al. 2022).

20.6.1.3 Response Effectiveness

  • Platform fact-checking labels reduced sharing but didn’t always change beliefs
  • Healthcare provider recommendations most effective at countering hesitancy
  • Community-based interventions outperformed mass media campaigns
  • Rapid correction of false claims more effective than delayed responses

20.6.1.4 Lessons Learned

  1. Speed matters: Early misinformation establishes difficult-to-change beliefs
  2. Pre-existing distrust amplifies misinformation impact
  3. Coordination across sectors (tech, healthcare, government) essential but challenging
  4. One-size-fits-all messaging ineffective; tailored approaches needed

20.6.2 Ivermectin and Hydroxychloroquine: Treatment Misinformation

Despite weak or negative evidence, these drugs became subjects of intense misinformation campaigns (Catalán-Figueroa et al., 2023; Self et al., 2020, NEJM).

20.6.2.1 Misinformation Tactics

  • Cherry-picking preliminary studies while ignoring larger, better-designed trials
  • Misrepresenting FDA guidance and scientific consensus
  • Amplifying anecdotes over systematic evidence
  • Accusations of conspiracy to suppress effective treatments

20.6.2.2 AI’s Role

  • Bots amplified pro-ivermectin hashtags, creating false impression of widespread support
  • LLMs generated articles mimicking medical journals’ style
  • Coordinated networks spread content across platforms simultaneously

20.6.2.3 Consequences

  • Poison control centers reported increased calls about ivermectin overdoses (Bernstein, 2021, Washington Post)
  • Patients delayed proven treatments in favor of unproven alternatives
  • Physician-patient relationships strained by conflicting information sources
  • Resources diverted to addressing preventable harms

20.6.2.4 Response Challenges

  • Initial uncertainty in pandemic’s early days provided opening for misinformation
  • Legitimate scientific debate weaponized to sow doubt
  • Censorship concerns complicated platform moderation
  • Political polarization transformed medical question into identity issue

20.6.3 AI-Generated Medical “Experts” and Fake Clinics

Emerging threat as of 2024: entirely synthetic medical authorities and practices.

20.6.3.1 Examples

  • AI-generated “doctors” on social media dispensing medical advice, building large followings before exposure
  • Fabricated clinics with websites, reviews, and virtual “staff”—all AI-generated—promoting ineffective treatments
  • Synthetic patient testimonials and before/after images for scam products

20.6.3.2 Detection

  • Reverse image searches revealing no prior online presence
  • Inconsistencies across posts and interviews
  • Inability to verify credentials through standard databases
  • Lack of real-world footprint (physical location, public records)

20.6.3.3 Implications

  • Traditional verification methods (checking credentials) become insufficient
  • Consumers face difficulty distinguishing real from synthetic medical information
  • Regulation lags behind technological capabilities
  • Trust in online health information further eroded

20.7 Rebuilding Trust and Institutional Credibility

20.7.1 The Trust Deficit in Public Health

Public trust in health institutions has declined markedly:

  • Trust in CDC fell from 71% pre-pandemic to 44% by 2023 in U.S. surveys (Montanaro, 2021, NPR)
  • Confidence in scientists decreased, particularly along partisan lines (Funk & Tyson, 2022, Pew Research)
  • Faith in public health messaging eroded during COVID-19 due to evolving guidance, perceived inconsistencies, and politicization

20.7.1.1 Root Causes

  1. Historical harms: Tuskegee syphilis study, forced sterilizations, and other abuses create justified skepticism, particularly in Black, Indigenous, and marginalized communities (Gamble, 1997, AJPH)
  2. Structural inequities: Health systems that inadequately serve certain populations breed distrust
  3. Communication failures: Inconsistent, overly technical, or paternalistic messaging alienates audiences
  4. Politicization: Partisan capture of public health issues undermines scientific authority
  5. Misinformation ecosystem: Constant exposure to contradictory information creates confusion

20.7.2 Transparency and Honest Communication

Evidence-based principles for rebuilding trust (Covello 2006):

20.7.2.1 Acknowledge Uncertainty

Clearly communicate what is known, unknown, and actively being investigated. Pretending certainty when it doesn’t exist backfires when information evolves.

Example: “Current evidence suggests [X]. However, we’re still learning about [Y]. We’ll update guidance as we learn more.”

20.7.2.2 Explain Changes

When recommendations shift, explain why based on new evidence rather than simply issuing new directives.

Example: “Early in the pandemic, we didn’t recommend masks because [limited supply, unclear evidence]. As we learned more about asymptomatic transmission and ensured healthcare workers had supplies, guidance changed to recommend masks for everyone.”

20.7.2.3 Address Concerns Directly

Don’t dismiss worries as irrational. Acknowledge underlying fears and provide empathetic, evidence-based responses.

❌ "Vaccine side effects are extremely rare; you're worrying about nothing."

✓ "It's understandable to worry about side effects. Let me explain what we know
about risks, which are much lower than risks from the disease itself."

20.7.2.4 Admit Mistakes

When errors occur, acknowledge them promptly and transparently, explaining corrective actions.

20.7.2.5 Provide Context

Help audiences understand how science works—that evolving knowledge leads to changing guidance.

20.7.3 Addressing Root Causes of Mistrust

Surface-level communication improvements are insufficient without addressing deeper issues:

Invest in health equity: Disparities in health outcomes and healthcare access fuel distrust. Communities receiving inadequate services reasonably question whether institutions prioritize their wellbeing (Bailey, Feldman, and Lewis 2017).

Community partnership vs paternalism: Genuine collaboration with communities in designing programs, not just delivering predetermined messages.

Long-term engagement: Trust builds through sustained presence and relationship-building, not just during crises.

Cultural humility: Recognize limitations of one’s own cultural perspective; learn from communities’ lived experiences and knowledge.

Accountability: Transparent reporting on health outcomes by demographic groups; clear mechanisms for addressing concerns and grievances.

20.7.4 Digital Literacy and Critical Thinking

Equipping the public to navigate the information environment:

20.7.4.1 Media Literacy Programs

Teaching skills to evaluate source credibility, identify manipulation techniques, and verify claims (Breakstone et al. 2021).

Effective approaches:

  • Interactive exercises (evaluating real examples)
  • Focus on HOW to think, not WHAT to think
  • Practice lateral reading and fact-checking techniques
  • Age-appropriate instruction in schools

20.7.4.2 Health Literacy

Building capacity to find, understand, and use health information (Berkman et al. 2011).

Key components:

  • Understanding medical terminology and concepts
  • Numeracy for interpreting statistics and risks
  • Recognition of quality indicators for health information sources
  • Skills for discussing health decisions with providers

Challenges: Literacy programs are long-term investments with diffuse benefits, making funding and prioritization difficult. Evaluation is also complex, because effects on actual behavior (not just knowledge) are the ultimate metrics.


20.8 Policy and Governance

20.8.1 Regulatory Approaches to AI-Generated Misinformation

Governments worldwide are developing policies, though approaches vary significantly:

European Union: The Digital Services Act (2022) requires large platforms to assess and mitigate risks from misinformation, with significant penalties for non-compliance (European Union 2022). The AI Act (2024) mandates disclosure of AI-generated content and prohibits certain manipulative AI applications.

United States: Section 230 of the Communications Decency Act provides broad immunity to platforms for user-generated content, complicating efforts to hold platforms accountable for misinformation (Kosseff 2019). There is no comprehensive federal regulation of AI-generated misinformation as of 2024, though state-level initiatives are emerging.

Singapore: Protection from Online Falsehoods and Manipulation Act (2019) empowers government to order corrections or removal of false statements, raising concerns about potential censorship (Tan and Mahbubani 2021).

India: Information Technology Rules (2021) require platforms to remove content flagged by government within specified timeframes, with mixed results balancing misinformation control and free expression (Jain and Verma 2022).

20.8.1.1 Challenges Across Jurisdictions

  • Defining “misinformation” without enabling censorship
  • Balancing public health protection with free speech
  • Jurisdictional complexity when platforms operate globally
  • Keeping regulation current as technology evolves rapidly

20.8.2 Platform Responsibilities

Ongoing debate about appropriate roles and obligations:

20.8.2.1 Arguments for Platform Accountability

  • Amplification algorithms actively promote content, making platforms more than neutral conduits
  • Scale and reach mean platform decisions have massive public health consequences
  • Platforms have access to data and tools unavailable to outside fact-checkers
  • Market dominance creates quasi-public square functions

20.8.2.2 Arguments for Limited Intervention

  • Censorship risks and government overreach concerns
  • Difficulty defining and detecting misinformation at scale
  • Potential for bias in content moderation decisions
  • Free market competition may provide better solutions than regulation

20.8.2.3 Emerging Consensus Elements

  • Transparency about content moderation policies and practices
  • User controls over algorithmic recommendations and content filtering
  • Disclosure requirements for AI-generated content
  • Rapid response mechanisms for health emergency misinformation
  • Independent audits of platform practices and impacts

20.9 Practical Implementation Guide

20.9.1 For Public Health Departments

20.9.1.1 Establish Misinformation Monitoring

1. Select monitoring tools (Meltwater, Brandwatch, CrowdTangle alternatives)
2. Define keywords and topics relevant to your jurisdiction
3. Set alert thresholds for emerging narratives (see the alerting sketch after this list)
4. Designate staff responsible for daily monitoring
5. Create escalation protocols for high-risk misinformation
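
As a concrete illustration of steps 2-3, the sketch below counts keyword mentions in the most recent hour of collected posts and raises an alert when a topic crosses its threshold. The keyword lists, thresholds, and post format are illustrative assumptions; in practice the posts would come from a social listening tool's export or API.

```python
# Minimal sketch of keyword-based alerting over recently collected posts.
from collections import Counter
from datetime import datetime, timedelta

KEYWORDS = {
    "vaccine_safety": ["vaccine injury", "vaccine death", "vaxx danger"],
    "fake_cures": ["miracle cure", "detox protocol", "unapproved treatment"],
}
ALERTS_PER_HOUR = {"vaccine_safety": 50, "fake_cures": 20}  # illustrative thresholds

def count_recent_mentions(posts: list, now: datetime) -> Counter:
    """posts: [{'text': str, 'timestamp': datetime}, ...] from the latest monitoring pull."""
    counts = Counter()
    cutoff = now - timedelta(hours=1)
    for post in posts:
        if post["timestamp"] < cutoff:
            continue
        text = post["text"].lower()
        for topic, phrases in KEYWORDS.items():
            if any(phrase in text for phrase in phrases):
                counts[topic] += 1
    return counts

def emerging_alerts(posts: list, now: datetime) -> list:
    """Topics whose mention count in the last hour meets or exceeds the threshold."""
    counts = count_recent_mentions(posts, now)
    return [topic for topic, n in counts.items() if n >= ALERTS_PER_HOUR.get(topic, float("inf"))]
```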

20.9.1.2 Develop Rapid Response Capacity

1. Pre-draft messaging templates for common misinformation types (see the template sketch after this list):
   - Vaccine safety concerns
   - Disease transmission myths
   - Treatment misinformation
   - False statistics
2. Identify trusted messengers in your community
3. Establish communication channels for rapid deployment
4. Create approval workflows that allow quick response (hours, not days)
5. Test response system with exercises
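
A lightweight way to keep step 1's pre-approved language ready for deployment is a small template library that communications staff fill in at response time. The sketch below uses only the standard library; the template wording, placeholder names, and URL are hypothetical and would need your department's communication and legal review before an actual incident.

```python
# Minimal sketch of pre-approved message templates filled at response time.
from string import Template

TEMPLATES = {
    "vaccine_safety": Template(
        "$vaccine vaccines are safe and effective. Claims that they cause $false_claim "
        "are not supported by studies involving millions of people. "
        "If you have questions, talk with your healthcare provider or visit $info_url."
    ),
    "treatment_misinfo": Template(
        "There is no evidence that $product treats or cures $disease. "
        "Using it instead of proven care can be harmful. For current guidance, see $info_url."
    ),
}

def draft_response(kind: str, **fields: str) -> str:
    """Fill a pre-approved template; raises KeyError if a placeholder is missing."""
    return TEMPLATES[kind].substitute(**fields)

# Example (hypothetical URL):
# draft_response("treatment_misinfo", product="colloidal silver",
#                disease="COVID-19", info_url="https://health.example.gov/updates")
```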

20.9.1.3 Build Trusted Relationships

1. Conduct stakeholder mapping: identify community leaders, healthcare providers,
   faith leaders, educators
2. Establish regular communication (quarterly meetings, newsletters)
3. Provide resources and training for partners to address misinformation
4. Create mechanisms for two-way feedback
5. Maintain relationships during non-crisis periods

20.9.1.4 Resource Allocation

  • Small departments (serving <100K): 0.5-1 FTE for misinformation monitoring and response
  • Medium departments (100K-500K): 1-2 FTE with specialized communication staff
  • Large departments (>500K): Dedicated misinformation unit with monitoring, analysis, and response capabilities

20.9.2 For Healthcare Systems

20.9.2.1 Clinician Training

1. Brief counseling techniques for patients influenced by misinformation:
   - Avoid confrontational correction
   - Ask permission to discuss concerns
   - Use motivational interviewing approaches
2. Common misinformation themes and evidence-based responses
3. Resources for patients (handouts, websites, videos)
4. Documentation of misinformation encounters for system-level tracking

20.9.2.2 Patient Education Materials

1. Proactive addressing of common misconceptions
2. Plain language, culturally appropriate formats
3. Available in multiple languages
4. Accessible (written at a 6th-8th grade level; see the readability check after this list)
5. Include sources and contact information for questions
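
Item 4's reading-level target can be checked automatically during drafting. The sketch below assumes the third-party `textstat` package and its `flesch_kincaid_grade` function, with an 8.0 cut-off to mirror the 6th-8th grade target; the score is a rough screen, not a substitute for plain-language review with the intended audience.

```python
# Minimal sketch of a readability check for draft patient materials.
import textstat

def readability_report(text: str, max_grade: float = 8.0) -> dict:
    """Estimate the reading grade level and whether it meets the target."""
    grade = textstat.flesch_kincaid_grade(text)
    return {"flesch_kincaid_grade": grade, "within_target": grade <= max_grade}

draft = (
    "The new vaccine helps your body learn to fight the virus. "
    "Most people have a sore arm for a day or two. Serious problems are very rare."
)
print(readability_report(draft))
```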

20.9.2.3 Social Media Guidelines

1. Clear policies for healthcare workers' personal social media use regarding
   medical topics
2. Institutional accounts' approach to misinformation (when to engage vs ignore)
3. Training for staff managing organizational social media
4. Crisis communication protocols for misinformation targeting the institution

20.9.3 For Researchers and Educators

20.9.3.1 Advancing the Evidence Base

Priority research areas:

  • Effectiveness of different prebunking and debunking approaches across diverse populations
  • Long-term impacts of misinformation exposure on health behaviors
  • Scalable technological solutions for detection and verification
  • Psychological mechanisms underlying misinformation belief and sharing
  • Economic costs of health misinformation to inform policy decisions

20.9.3.2 Teaching Critical Evaluation Skills

1. Incorporate media literacy into health professional curricula
2. Use real-world misinformation examples for case-based learning
3. Teach lateral reading and fact-checking techniques
4. Address epistemology: how we know what we know in medicine and science
5. Prepare students to navigate information ecosystem in their careers

20.10 Summary and Key Takeaways

AI-generated health misinformation represents a fundamental challenge to public health in the 21st century. The technologies enabling creation of convincing false content at scale are widely accessible and rapidly improving. However, evidence-based strategies exist for detection, counter-messaging, and building resilience.

20.10.1 Core Principles

  1. Prevention over correction: Prebunking and inoculation are more effective than debunking after misinformation spreads
  2. Trust is foundational: Without trusted institutions and messengers, even accurate information is dismissed
  3. Technology alone is insufficient: Human judgment, community engagement, and relationship-building remain essential
  4. Speed matters: Rapid response to emerging misinformation prevents establishment of false beliefs
  5. Tailoring is critical: One-size-fits-all approaches fail; messages must reflect community contexts and values
  6. Continuous adaptation: As AI misinformation evolves, detection and response strategies must also evolve

20.10.2 Looking Ahead

The cat-and-mouse game between AI-generated misinformation and detection technologies will continue, likely with generating systems maintaining advantages. Policy and regulation will struggle to keep pace with technological change. The most sustainable approach combines:

  • Technological tools for detection and verification (acknowledging limitations)
  • Evidence-based communication strategies rooted in behavioral science
  • Institutional reforms addressing root causes of mistrust
  • Media literacy education building population-level resilience
  • Multi-stakeholder collaboration spanning technology, healthcare, government, and community sectors

20.10.3 The Bottom Line

Managing health misinformation in the age of generative AI requires sustained investment, cross-sector collaboration, and recognition that this is not a problem with a one-time solution but an ongoing challenge requiring vigilance and adaptation. Public health must develop capacity to operate effectively in an information environment where distinguishing truth from sophisticated fabrication is increasingly difficult. Success depends not just on fighting false information, but on building systems and relationships resilient enough to withstand the infodemic’s ongoing assault.


Check Your Understanding

Test your knowledge of AI-generated misinformation and infodemic management. Click to reveal answers and explanations.

A public health department discovers AI-generated articles claiming a new vitamin supplement cures diabetes. The articles include fabricated citations to non-existent medical journals. What capability of large language models makes this possible?

  1. LLMs have access to real-time medical databases
  2. LLMs can generate plausible but false text including fake citations
  3. LLMs only summarize existing content, so articles must have real sources
  4. LLMs cannot generate medical content due to safety restrictions

Answer: b) LLMs can generate plausible but false text including fake citations

Explanation: Large language models learn statistical patterns from training data and can generate text that appears authoritative without accessing real sources or verifying accuracy.

How LLMs fabricate convincing content:

LLMs generate text based on learned patterns, not knowledge bases. They can produce:

  • Fake citations: Journal names, authors, DOIs that don’t exist but follow correct formatting
  • Fabricated statistics: Numbers and percentages that sound plausible
  • Non-existent studies: Detailed descriptions of research that was never conducted
  • Synthetic quotes: Attributed to real or fictional experts

Why this is dangerous:

LLM-generated fake article:
"A groundbreaking study published in the Journal of Metabolic Health
(2024;42:177-184) by Dr. Sarah Johnson at Stanford University found
that daily consumption of Supplement X reduced HbA1c by 2.3% in diabetic
patients (p<0.001). The randomized trial of 500 participants showed..."

Reality:
- No "Journal of Metabolic Health"
- No Dr. Sarah Johnson in Stanford diabetes research
- No study with these results
- All details fabricated but formatted correctly

Detection challenges:

Verification requires:
1. Cross-referencing citations (journal exists? article retrievable? authors real?)
2. Checking researchers' actual work (PubMed, Google Scholar)
3. Evaluating biological plausibility (claims consistent with known science?)
4. Consulting domain experts (would specialists consider this credible?)

Time-consuming process that most consumers won't undertake

Real-world example: In 2023, researchers found that ChatGPT could generate fake medical abstracts that fooled medical professionals 32% of the time, with participants unable to reliably distinguish AI-generated from human-written content (Gao et al. 2023).

Lesson: LLMs are sophisticated pattern-matching systems, not fact-checking databases. They generate text that sounds authoritative without verifying truthfulness. Public health practitioners must verify suspicious content through independent fact-checking, never assuming professional-looking text is accurate. Always check citations, cross-reference claims, and consult primary sources.
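
One of the checks recommended in the lesson, verifying that a citation actually exists, can be partly automated. The sketch below queries the public Crossref REST API via the `requests` package; the response fields accessed are assumptions to confirm against Crossref's documentation, and a missing match is a red flag rather than proof of fabrication, since some legitimate sources are not indexed.

```python
# Sketch: check whether a cited article can be found in a bibliographic index.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list:
    """Return the closest bibliographic matches for a free-text citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("message", {}).get("items", [])
    return [
        {
            "title": (item.get("title") or [""])[0],
            "journal": (item.get("container-title") or [""])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

# Usage: the fabricated "Journal of Metabolic Health" citation from the example
# above should return no close match.
# print(crossref_lookup("Journal of Metabolic Health 2024;42:177-184 Supplement X HbA1c"))
```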

A county health department wants to prepare residents for expected vaccine misinformation before a new vaccine rollout. Which approach is MOST effective?

  1. Wait for misinformation to appear, then quickly debunk it
  2. Pre-expose residents to weakened forms of misinformation with refutations (inoculation)
  3. Provide only positive information about vaccines without mentioning misinformation
  4. Rely on social media fact-checking labels to correct false claims

Answer: b) Pre-expose residents to weakened forms of misinformation with refutations (inoculation)

Explanation: Prebunking (inoculation) is more effective than debunking because it builds cognitive resistance before exposure to misinformation.

Inoculation theory: Like biological vaccines, psychological inoculation exposes people to weakened forms of misinformation along with refutations, helping them recognize and resist manipulation techniques when encountered later (Roozenbeek et al. 2020).

How inoculation works:

Traditional approach (debunking):
1. Misinformation spreads
2. People believe it
3. Correction attempts (often ineffective due to backfire effects)
4. Belief persists in many people

Inoculation approach (prebunking):
1. Before misinformation spreads, show people examples of manipulation
2. Explain techniques: emotional appeals, fake experts, cherry-picked data
3. Provide refutations and teach critical thinking
4. When real misinformation encountered, people recognize tactics
5. Resistance to persuasion built

Evidence base:

  • Meta-analysis of inoculation studies: average effect size of 0.4-0.5, reducing susceptibility to misinformation by 5-10 percentage points (Roozenbeek et al. 2022)
  • Effects persist over time (weeks to months), unlike debunking which often has temporary impact
  • Works across diverse contexts: vaccines, climate change, election misinformation

Practical implementation:

Pre-rollout inoculation campaign (2-4 weeks before vaccine):

Example prebunking message:
"Before the new vaccine launches, you may see false claims using these tactics:
1. Emotional stories: Dramatic personal accounts of harm without evidence
2. Fake experts: People with credentials in unrelated fields claiming expertise
3. Cherry-picked data: Highlighting rare events while ignoring overall safety

Example: 'The vaccine caused my friend's cousin's illness!' (anecdote, not data)
Reality: Careful studies of millions show vaccines are safe and effective.

Remember: Check sources, look for scientific consensus, ask your doctor."

Why other answers are less effective:

  • a) Debunking after spread:
    • Misinformation establishes belief first (primacy effect)
    • Corrections must overcome existing beliefs (harder than preventing formation)
    • Backfire effects possible: repeating misinformation can reinforce it
    • Always playing catch-up (reactive, not proactive)
  • c) Only positive information:
    • Leaves people unprepared for misinformation they’ll inevitably encounter
    • Creates knowledge gap when hearing alarming claims elsewhere
    • Appears to be hiding information if concerns go unaddressed
    • Misses opportunity to build critical thinking skills
  • d) Social media fact-checking labels:
    • Useful but insufficient: labels only reach those already exposed
    • Partisan effects: some users distrust fact-checkers
    • Doesn’t teach people HOW to evaluate claims independently
    • Can’t keep pace with volume of misinformation

Combining approaches: Best practice uses a layered strategy:

  1. Primary: Inoculation before misinformation spreads widely
  2. Secondary: Rapid debunking for emergent false claims that gain traction
  3. Tertiary: Trusted messenger engagement for one-on-one conversations with hesitant individuals

Real-world success: WHO prebunking campaign before COVID-19 vaccine rollout in several countries correlated with 8-12% higher vaccination rates compared to matched regions without prebunking (World Health Organization 2021).

Lesson: An ounce of prevention is worth a pound of cure. Inoculation builds cognitive resistance before misinformation takes hold, making it more effective than trying to correct beliefs after they’ve formed. Public health should invest in prebunking as routine practice, not just crisis response.

True or False: AI detection tools can reliably identify all AI-generated health misinformation with >95% accuracy.

Answer: False

Explanation: Current AI detection technologies have significant limitations, with accuracy varying widely and no tool achieving consistent >95% reliability across contexts.

Why detection is challenging:

1. Adversarial evolution:

Arms race dynamic:
- Detectors improve → Generators adapt to evade detection
- Cat-and-mouse game favors generators (asymmetric warfare)
- Generators only need to fool detector once; detectors must catch every fake

Example:
GPTZero launches (detects GPT-3 text) → Users use GPT-4 or prompt engineering
to avoid detection → GPTZero updates → Users find new workarounds

2. False positive problem:

Real-world performance:
- GPTZero: ~70-80% accuracy, 10-15% false positive rate
- OpenAI's own detector: discontinued in 2023 due to "low rate of accuracy"
- Originality.AI: ~80-85% accuracy on test sets, lower on real-world content

False positives are dangerous:
- Legitimate health information flagged as fake
- Erodes trust in detection systems
- Can be weaponized to suppress accurate information

3. Context dependency: Detection accuracy varies by:

  • Language: Models trained on English perform poorly on other languages
  • Domain: Medical text differs from other content, affecting detection
  • Generation method: Different LLMs have different detection signatures
  • Prompt engineering: Careful prompting can produce text that evades detection

4. Human writing mimics AI:

Ambiguous cases:
- Formulaic medical writing (standardized terminology)
- Text edited by both humans and AI (hybrid content)
- Human authors using AI-assisted writing tools

Result: Unclear boundary between "AI-generated" and "human-written"

Current state of detection technologies:

Text detection:

Best tools: 70-85% accuracy
Methods:
- Statistical patterns (word frequency, sentence structure)
- Perplexity analysis (how "surprised" model is by text)
- Watermarking (only works if generator embeds watermarks)

Limitations:
- No universal watermarking standard
- Sophisticated users can evade detection
- Evolves rapidly as new models release

Image/video detection (deepfakes):

Best tools: 60-90% accuracy (varies by deepfake quality)
Methods:
- Facial biometrics (blinking patterns, micro-expressions)
- Lighting inconsistencies
- Compression artifacts
- Temporal coherence (frame-to-frame consistency)

Limitations:
- High-quality deepfakes increasingly undetectable
- Computational cost for real-time analysis
- New generation techniques emerge constantly

What works better than pure automation:

Hybrid approaches:

  1. AI-assisted human verification: Tools flag suspicious content for human expert review
  2. Provenance tracking: C2PA standard embeds metadata about content origins
  3. Crowdsourced verification: Community Notes model distributes verification
  4. Multi-signal assessment: Combine automated detection + source reputation + claim verification

Practical recommendations:

Instead of relying on detection:

✓ Verify claims through authoritative sources
✓ Use lateral reading (check what other credible sources say)
✓ Look for primary sources (original studies, not summaries)
✓ Check author credentials through independent databases
✓ Assess biological plausibility with domain experts
✓ Consider motivations (who benefits from this claim?)

Future directions:

  • Better detection as research progresses, but unlikely to achieve perfect accuracy
  • Provenance standards (C2PA) may help if widely adopted
  • Regulatory requirements for AI-generated content labeling
  • Focus shifting from “Is this AI?” to “Is this accurate?”

Lesson: Don’t assume detection tools are silver bullets. They’re helpful screening mechanisms but not definitive arbiters. False confidence in flawed detection is dangerous—may miss sophisticated fakes while flagging legitimate content. Best approach: Combine imperfect automated tools with human expertise and systematic verification practices. Teach critical evaluation skills rather than depending solely on technology.

A rural health department wants to address vaccine hesitancy. Who is likely to be the MOST effective messenger?

  1. CDC director in a national press conference
  2. Local doctors and nurses patients already know and trust
  3. Celebrity influencers with large social media followings
  4. Government officials from the county health department

Answer: b) Local doctors and nurses patients already know and trust

Explanation: Personal healthcare providers consistently rank as most trusted sources for health information across diverse populations.

Trust hierarchy (evidence-based):

Most effective:

  1. Personal physicians/nurses: 60-75% trust across surveys (Thaker and Ganchoudhuri 2021)
    • Established relationship and rapport
    • Direct knowledge of patient’s health context
    • Perceived as putting patient’s interests first
    • Can address individual concerns personally

Moderately effective:

  2. Community leaders: 40-60% trust (varies by community)
    • Faith leaders in religious communities
    • Teachers and school administrators
    • Local business owners
    • Trusted elders in cultural communities

  3. Peer influencers: 30-50% trust
    • People similar in age, background, experiences
    • “People like me” effect
    • Effective for younger demographics via social media

Less effective:

  4. Subject matter experts: 30-45% trust (declining, varies by politics)
    • Scientists and researchers
    • Academic authorities
    • Trust has become politicized

  5. Government and public health officials: 20-40% trust
    • CDC, WHO, national health agencies
    • County/state health departments
    • Lowest trust in marginalized communities with history of medical abuse

Why local healthcare providers are most effective:

1. Relationship-based trust:

"My doctor knows me and my health history"
- Personal connection (not anonymous authority)
- Demonstrated care over time
- Individual health advice (not generic messaging)
- Can answer questions in real-time

2. Perceived credibility:

"My doctor has nothing to gain from lying to me"
- Financial incentives align with patient health
- Professional oath and ethics
- Reputation depends on patient outcomes
- Direct accountability

3. Accessibility and dialogue:

"I can ask my doctor questions and get answers"
- Two-way communication
- Address specific concerns
- Tailored to individual context
- Opportunity for follow-up

Evidence from research:

COVID-19 vaccine studies:

  • Patients who discussed vaccines with their provider: 70-80% vaccinated
  • Patients who didn’t discuss with provider: 40-50% vaccinated
  • Provider recommendation increased odds of vaccination by 2.5-3x (Thaker and Ganchoudhuri 2021)

Vaccine hesitancy interventions:

  • One-on-one conversations with trusted healthcare providers: the most effective intervention
  • Mass media campaigns from government: minimal impact
  • Celebrity endorsements: limited impact and sometimes backfire

Why other options are less effective:

a) CDC director in national press conference:

  • Distant, impersonal
  • No relationship with individual
  • Can’t address specific concerns
  • Lower baseline trust in federal officials
  • Perceived as political or bureaucratic

c) Celebrity influencers:

  • No medical expertise (perceived as unqualified)
  • Financial motives suspected (paid endorsements?)
  • Parasocial relationship (not real trust)
  • Can backfire if celebrity controversial
  • Works for awareness, not persuasion on complex medical decisions

d) County health department officials:

  • Better than federal, but still government
  • Institutional, not personal
  • Lower trust than individual providers
  • Can’t tailor to individual health situations
  • History of mistrust in some communities (Tuskegee, forced sterilization)

Practical implementation:

For health departments:

Don't:
- Rely solely on government messaging
- Assume official status = trust
- Use one-way communication

Do:
- Empower local healthcare providers with resources
- Provide messaging toolkits for clinicians
- Support provider-patient conversations
- Partner with community health centers
- Train community health workers (trusted peer messengers)

For healthcare systems:

Strategies to leverage provider trust:
1. Allocate time for vaccine conversations (don't rush)
2. Provide providers with up-to-date FAQs and evidence
3. Train on motivational interviewing techniques
4. Document and address common concerns
5. Encourage providers to share vaccination status (modeling)

When providers aren’t trusted:

In communities with justified medical mistrust (historical abuse), also engage:

  • Community health workers: Peers from same community
  • Faith leaders: Trusted figures with moral authority
  • Traditional healers: Respected in some cultural contexts
  • Family matriarchs/patriarchs: Decision-makers in some cultures

Multi-messenger approach:

Optimal strategy combines:
1. Primary: Personal healthcare provider (individual counseling)
2. Secondary: Community leaders (cultural context, values alignment)
3. Tertiary: Peer influencers (social proof, "people like me")
4. Supporting: Public health officials (consistent messaging, resources)

But provider remains cornerstone

Lesson: Trust is earned through relationships, not institutional authority. In an era of widespread mistrust in institutions, the personal physician-patient relationship remains the strongest foundation for health communication. Public health strategies should prioritize empowering trusted local messengers rather than relying on top-down, government-led campaigns. Meet people where they are, through voices they already trust.

Your health department detects rapidly spreading misinformation claiming a new foodborne illness outbreak is caused by a specific brand, when investigation shows contamination is multi-source and brand-unspecified. The false claim is being amplified by bots. What should be your FIRST priority?

  1. Contact the platform to remove all posts mentioning the brand
  2. Issue a detailed technical report explaining the investigation methodology
  3. Release a brief, clear statement with what is known, unknown, and protective actions
  4. Wait until investigation is complete before saying anything

Answer: c) Release a brief, clear statement with what is known, unknown, and protective actions

Explanation: Speed and clarity matter in rapid response. The priority is providing accurate, actionable information quickly, not achieving perfect completeness or removing all false content.

Why option C is correct:

1. Speed beats perfection in outbreak communication:

Misinformation vacuum:
- Misinformation spreads 6x faster than corrections
- Delayed response allows false narrative to establish
- First information encountered has primacy effect (shapes beliefs)
- People seek information during uncertainty - fill vacuum with truth

Timing:
- Initial response: Within 1-2 hours of detecting viral misinformation
- Can't wait for complete investigation (may take days/weeks)
- Update as new information emerges

2. Transparency builds trust:

Acknowledge uncertainty:
"What we know:
- Multistate foodborne illness outbreak under investigation
- Multiple brands/sources implicated
- Investigation ongoing

What we don't yet know:
- Exact contamination source
- Complete list of affected products

What you should do:
- Avoid [food type] until further notice
- If symptoms develop: [guidance]
- Check [website] for updates"

This honesty:
- Prevents appearance of cover-up
- Sets expectations for evolving information
- Shows humility and credibility
- Gives people actionable steps

3. Focus on protective actions:

People need to know:
- Am I at risk?
- What should I do?
- Where can I get updates?

Not:
- Complex methodology
- Statistical analysis
- Bureaucratic processes

Action-oriented messaging:
"To protect yourself:
1. Don't eat [specific food type] until further notice
2. If you have symptoms: [call doctor/go to ER]
3. Check [health department website] for daily updates
4. Report illnesses: [contact info]"

Why other options are problematic:

a) Contact platform to remove all posts:

Problems:

  • Slow (platforms take hours or days to respond)
  • Incomplete (can’t catch everything)
  • Creates “Streisand effect” (censorship draws more attention)
  • Doesn’t provide an alternative narrative (leaves an information vacuum)
  • First Amendment concerns in the US
  • May miss accounts not on monitored platforms

Better approach:

  • Report the most harmful content and bot networks to platforms
  • BUT simultaneously release the truth (don’t wait for removal)
  • Provide accurate information to compete with misinformation
  • Algorithmic demotion > removal (less backlash)

b) Issue detailed technical report:

Problems:

  • Too slow (takes hours/days to write)
  • Too complex (public won’t read a 20-page technical document)
  • Misses window for initial correction
  • Doesn’t address immediate public concern
  • Scientific jargon alienates audience

Better approach:

  • Brief initial statement (option C)
  • Technical details later for media/stakeholders
  • Multiple communication products for different audiences:
    • General public: Simple, actionable
    • Media: More detail, technical contacts
    • Healthcare providers: Clinical guidance
    • Industry: Investigation details

d) Wait until investigation complete:

Problems:

  • Investigation may take weeks
  • Misinformation fills information vacuum
  • Public makes decisions based on false information
  • Trust erodes (“why aren’t they telling us?”)
  • Outbreak may worsen without interim protective actions
  • Appears defensive or covering up

Better approach:

  • Communicate what you know NOW
  • Update as investigation progresses
  • Explain evolving information is normal in investigations
  • Set expectation for timeline

Evidence-based rapid response framework:

FIRST 2 HOURS:

1. Assess misinformation:
   - Reach and growth rate
   - Potential harm level
   - Amplification mechanisms (bots?)

2. Coordinate rapid statement:
   - Known facts
   - Unknown but under investigation
   - Protective actions
   - Update timeline

3. Deploy through multiple channels:
   - Health department website (primary source)
   - Social media (reach affected audiences)
   - Media alert (amplification)
   - Partner networks (clinicians, community groups)

WITHIN 24 HOURS:

4. Expand messaging:
   - Fact sheet for media
   - Clinical guidance for providers
   - Industry communication (if relevant)
   - Social media graphics (shareable)

5. Monitor spread:
   - Track misinformation evolution
   - Measure reach of corrections
   - Identify new narratives

6. Engage trusted messengers:
   - Local healthcare providers
   - Community leaders
   - Partner organizations

ONGOING:

7. Regular updates:
   - Daily during acute phase
   - As investigation progresses
   - When new information available

8. Retrospective:
   - What worked / didn't work
   - Update templates for next time
   - Document lessons learned

Real-world example:

Romaine lettuce E. coli outbreak (2018):

What happened:

  • CDC initially said “avoid all romaine”
  • Later specified only certain regions affected
  • Public confused by changing guidance
  • Trust eroded

What would work better: “Initial alert: We’re investigating E. coli linked to romaine. Out of caution, avoid romaine until we identify specific source. We’ll update within 48 hours with more specific guidance as investigation progresses.”

Then update as information emerges, explaining why guidance evolves.

Lesson: In outbreak communication during active misinformation, speed + transparency + actionability beat completeness + perfection. Provide what you know now, acknowledge uncertainty, give protective actions, and promise updates. Information vacuum is the enemy—fill it with truth before misinformation does. You can always refine messaging as investigation progresses, but you can’t reclaim the first critical hours when false narratives establish.


Further Resources

20.10.4 Academic Resources

  • First Draft Essential Guide to Understanding Information Disorder: https://firstdraftnews.org/
  • WHO Managing Infodemics: https://www.who.int/teams/risk-communication/infodemic-management
  • Stanford Digital Repository on Misinformation: https://cyber.fsi.stanford.edu/

20.10.5 Tools and Platforms

  • Content Provenance: Coalition for Content Provenance and Authenticity (C2PA) - https://c2pa.org/
  • Fact-checking Networks: International Fact-Checking Network - https://www.poynter.org/ifcn/
  • Media Literacy: SIFT Method by Mike Caulfield - https://hapgood.us/

20.10.6 Training Programs

  • WHO Training in Risk Communication: https://www.who.int/teams/risk-communication
  • CDC Crisis and Emergency Risk Communication (CERC): https://emergency.cdc.gov/cerc/
  • News Literacy Project: https://newslit.org/