Preface
AI as a Tool, Not a Revolution
Public health has always been about translation: converting complex science into actionable interventions, transforming data into decisions, and bridging the gap between what we know and what we do.
Artificial intelligence is entering our field not as a revolution, but as another tool requiring translation.
The challenge isn’t whether AI will change public health practice. It already has. Outbreak detection algorithms scan billions of social media posts. Predictive models forecast disease spread across continents. Natural language processing extracts insights from unstructured clinical notes. Chatbots deliver health information at scale.
The real challenge is this: How do we, as public health professionals, evaluate these tools critically, deploy them responsibly, and understand their limitations honestly without needing computer science degrees?
This handbook attempts that translation.
What This Handbook Is Not
This is not a technical manual for building state-of-the-art AI systems. If you want to architect neural networks or optimize gradient descent algorithms, excellent resources exist elsewhere.
This is not a manifesto claiming AI will solve all public health challenges. It won’t. Many of our field’s hardest problems—health inequity, structural determinants of health, inadequate funding, political barriers to evidence-based policy—are fundamentally human problems that no algorithm can fix.
This is not a catalog of futuristic possibilities. I’ve tried to focus on what exists now, what works (sometimes), what fails (often), and what we can actually implement with real data, real constraints, and real public health infrastructure.
What This Handbook Attempts
I’ve tried to write the resource I needed three years ago when I started encountering AI applications in surveillance work and realized I didn’t have a framework to evaluate them.
This handbook attempts to:
- Demystify without oversimplifying — AI isn’t magic, but it’s also not just “fancy statistics”
- Show working examples, not just concepts — Every major technique includes code you can run and modify
- Acknowledge failures as loudly as successes — Most AI projects fail. Learning why matters more than celebrating the rare successes
- Ground everything in public health context — The technical details matter less than understanding when a tool is appropriate for your specific problem
- Remain honest about uncertainty — I don’t have all the answers. The field is evolving faster than any handbook can track
A Note on Incompleteness
This handbook will always be incomplete. AI capabilities shift monthly. New applications emerge constantly. Techniques that seemed promising become obsolete. Regulations change.
I’ve chosen to release this as a living document rather than wait for comprehensiveness that will never come. Better to have an honest, evolving resource than a polished artifact that’s outdated before publication.
If you find errors, better approaches, missing perspectives, or implementations that contradict what I’ve written—please contribute. Public health has always been a collaborative field. This handbook should be too.
On Reading This Book
You don’t need to read sequentially. Jump to whatever section addresses your immediate problem. Use the search function. Skip the technical details if you just need conceptual understanding. Dive deep into the code if you’re implementing something.
Treat this as a field guide, not a textbook. Dog-ear the pages (digitally). Write in the margins (via GitHub comments). Adapt the examples to your context.
Most importantly: stay skeptical. Question the tools, the hype, the limitations I’ve described, and even the frameworks I’ve proposed. Public health has suffered before from adopting technologies uncritically. We can’t afford to make the same mistake with AI.
How to Use the Chapter Summaries (TL;DRs)
Every chapter begins with an expandable 📋 Chapter Summary (TL;DR) section designed to help you quickly grasp key concepts before diving deep—or to serve as a standalone reference when time is limited.
Three Reading Strategies:
- The Quick Scan (5 minutes per chapter):
  - Read only the TL;DR section
  - Use this when evaluating whether a chapter addresses your immediate need
  - Perfect for decision-makers who need the “bottom line” without technical details
  - Best for: policymakers, directors, or practitioners doing initial triage
- The Strategic Deep Dive (30–60 minutes per chapter):
  - Read the TL;DR first to understand the landscape
  - Then read the full chapter for technical depth, code examples, and case studies
  - Use this when implementing AI systems or developing domain expertise
  - Best for: epidemiologists, data scientists, and implementers building systems
- The Reference Lookup (2–3 minutes per concept):
  - Use search to find relevant chapters
  - Jump to the TL;DR for quick answers to specific questions (“What metrics matter for LLM evaluation?” “How do I detect model drift?”)
  - Return to the full chapter only if you need implementation details
  - Best for: anyone troubleshooting problems or validating vendor claims
What’s Inside Each TL;DR:
- The Big Picture: Why this topic matters and the critical context
- Key Frameworks: Core concepts distilled to essential decision points
- Cautionary Tales: Real-world failures and what they teach us (because failures teach more than successes)
- Practical Guidance: What actually works in deployment
- Visual Indicators: Watch for icons highlighting critical warnings ⚠️, success patterns ✅, common failures ❌, and key insights 💡
- The Takeaway: The one thing you must remember from each chapter
A Note on Depth:
The TL;DRs are not superficial summaries. They represent hundreds of hours of synthesis, capturing the essential knowledge needed for informed decision-making. Many practitioners have told me the TL;DRs alone cover 80% of their work; they dive into the full chapters only for the remaining 20% that requires implementation details.
That’s intentional. Your time matters. Use it strategically.
Bryan Tegomoh, MD, MPH
Berkeley, California
October 2025