OutreachAI
AI-Powered
Key Outcome
Built a full-stack AI application with dual backends (Next.js + FastAPI), live LLM streaming from Claude and Gemini, and Docker containerization — generating personalized patient outreach across SMS, email, and in-app channels with engagement predictions.

The Challenge
Care coordinators in maternal and women's healthcare need to send personalized outreach messages to patients across multiple channels. Messages must account for clinical context, risk factors, care team information, and lifecycle stage. Manual message crafting is time-intensive and inconsistent.
I wanted to build an application that leverages LLMs to generate channel-appropriate messages with multiple variants, engagement scoring, and clinical reasoning — while supporting multiple AI providers and backend implementations.
What I Built
A full-stack application with two independent backend implementations — a Next.js API route (TypeScript) and a FastAPI service (Python) — both providing the same streaming endpoint:
- Patient-aware generation with four realistic patient profiles, each including clinical context, risk factors, and interaction history
- Multi-channel output (SMS, email, in-app) with A/B/C variant generation and channel-appropriate formatting
- Engagement scoring with predicted likelihood (high/medium/low) and clinical reasoning for each variant
- Live LLM streaming via Server-Sent Events, implemented with a ReadableStream in Next.js and sse-starlette in FastAPI
- Backend parity — the TypeScript and Python implementations expose identical request/response behavior on the same endpoint
- Docker Compose containerization running both services together
- Demo mode with smart fallback (exact match → goal-match → tone-match → generic) — fully functional without API keys
- Rate limiting, access code gating, input validation, and response validation at the API boundary
- Responsive design with desktop sidebar layout and mobile bottom sheet drawer
- 135 Next.js tests (Vitest) + 56 Python tests (pytest) with CI via GitHub Actions
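The demo-mode fallback chain (exact match → goal-match → tone-match → generic) can be sketched as a keyed lookup with progressively looser keys. This is a minimal illustration, not the actual implementation: the `CANNED` store, its `(patient_id, goal, tone)` key shape, and the sample messages are all assumptions for the sketch.

```python
from typing import Optional

# Hypothetical canned-response store keyed by (patient_id, goal, tone).
# None in a key slot means "match any value for this field".
CANNED = {
    ("p1", "appointment_reminder", "warm"): "Hi Maria, your prenatal visit is Tuesday at 10am.",
    (None, "appointment_reminder", None): "Reminder: you have an upcoming appointment.",
    (None, None, "warm"): "Your care team is here for you whenever you need us.",
}

GENERIC = "Your care team has an update for you. Please check your messages."

def demo_response(patient_id: str, goal: str, tone: str) -> str:
    """Resolve a canned message: exact -> goal-match -> tone-match -> generic."""
    for key in (
        (patient_id, goal, tone),  # exact match on all three fields
        (None, goal, None),        # fall back to a goal-level match
        (None, None, tone),        # fall back to a tone-level match
    ):
        if key in CANNED:
            return CANNED[key]
    return GENERIC                 # final generic fallback
```

Ordering the candidate keys from most to least specific keeps the chain deterministic and makes the demo fully functional without any API keys.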
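Because both backends stream over Server-Sent Events, they need to agree on the wire framing. The sketch below shows the SSE frame format in plain Python, independent of either framework; the `delta` field name and the `[DONE]` sentinel are assumptions for illustration, not the project's actual event schema.

```python
import json
from typing import Iterator

def sse_events(chunks: Iterator[str]) -> Iterator[str]:
    """Wrap LLM text chunks in SSE frames: 'data: <json>' followed by a
    blank line, with a sentinel frame so clients know the stream is done."""
    for chunk in chunks:
        yield f"data: {json.dumps({'delta': chunk})}\n\n"
    yield "data: [DONE]\n\n"  # assumed end-of-stream sentinel
```

Either backend can feed its provider's token stream through a generator like this: FastAPI via sse-starlette's `EventSourceResponse`, Next.js by writing equivalent frames to a `ReadableStream`.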
Tech Stack
Links
