Overview
Proposed 90-Day Plan for the Chief AI Officer Role
Prepared by Joe Scanlin
Make RealMDai the most trustworthy way to turn patient context into a clinically usable plan.
Great AI
Structured reasoning + evidence retrieval + reliable outputs.
Great Workflow
Fast, safe review and sign-off for MDs.
Great Auditability
Traceability of what was recommended, why it was recommended, what the MD changed, and what was finally prescribed.
90-Day Plan Timeline
Days 0-30: Safety & Quality Foundation
Map clinical pipeline, define risk taxonomy, build evaluation harness, ship MD workflow v1, establish audit trail.
Days 31-60: Sharpen & Speed Up
Upgrade to structured clinical plans, cut MD review time by 50%, close learning loop, start integrations.
Days 61-90: Scale Responsibly
Scale doctor-verified workflow, build operational reliability, own 2-3 clinical verticals, establish team and cadence.
Days 0-30
Safety + Quality Foundation
Strategic Goals
- Map the clinical pipeline. Document the complete flow from patient input to final MD decision. Identify every failure mode.
- Define safety boundaries. Establish clinical risk taxonomy, hard stops, and escalation rules. Know what we will never do.
- Build evaluation harness. Create automated testing for guideline adherence, contraindications, red flags, hallucinations, and clarity.
- Ship MD workflow v1. Efficient review UI that captures structured reasons for every edit.
- Establish audit trail. Immutable log for every encounter: inputs, outputs, checks, edits, final plan.
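The audit-trail deliverable can be sketched as an append-only, hash-chained log, so that any after-the-fact edit to an encounter record is detectable. This is a minimal illustration; the `AuditTrail` class, field names, and event types are assumptions, not an existing RealMDai API.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, tamper-evident log: each record hashes its predecessor."""

    def __init__(self):
        self.records = []

    def append(self, encounter_id, event, payload):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "encounter_id": encounter_id,
            "event": event,  # e.g. "input", "model_output", "md_edit", "final_plan"
            "payload": payload,
            "ts": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)

    def verify(self):
        """Recompute the chain; editing any record breaks every later hash."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or expected != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

trail = AuditTrail()
trail.append("enc-001", "input", {"text": "sore throat, 3 days"})
trail.append("enc-001", "md_edit", {"diff": "Drug A -> Drug B"})
trail.append("enc-001", "final_plan", {"plan_id": "p-001"})
```

The hash chain is what turns the log into a compliance asset: a regulator or auditor can verify integrity without trusting the database it lives in.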
Key Deliverables
| Initiative | Owner | Due | Status |
|---|---|---|---|
| Clinical Pipeline Map | CAIO + Clinical Lead | Day 7 | pending |
| Clinical Risk Taxonomy | CAIO + Clinical Advisor | Day 14 | pending |
| Hard Stops and Escalation Rules | CAIO + Legal/Compliance | Day 14 | pending |
| Evaluation Harness in CI | ML Engineers | Day 21 | pending |
| MD Workflow v1 | Product + Engineering | Day 28 | pending |
| Audit Trail per Encounter | Engineering + Compliance | Day 30 | pending |
Days 31-60
Sharpen & Speed Up
Strategic Goals
- Upgrade to structured clinical plans. Move from chat responses to structured outputs: problem list, differential, tests, meds with dosing and contraindications.
- Cut MD review time by 50%. Better UX: summaries, highlights, templated edits, keyboard shortcuts.
- Close the learning loop. Turn MD edits into policy, prompt, and model improvements systematically.
- Start integrations. Staged approach to EHR, labs, and eRx. Start minimal, prove value, then scale.
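The shift from free-text chat to structured plans can be illustrated with a plain schema. This is a minimal sketch; the field names and types are assumptions for illustration, not a finalized data model:

```python
from dataclasses import dataclass, field

@dataclass
class Medication:
    name: str
    dose: str        # e.g. "500 mg"
    frequency: str   # e.g. "TID"
    contraindications: list = field(default_factory=list)

@dataclass
class ClinicalPlan:
    problem_list: list
    differential: list   # ranked differential diagnoses
    tests: list          # ordered labs/imaging
    medications: list
    citations: list = field(default_factory=list)  # guideline sources

plan = ClinicalPlan(
    problem_list=["acute pharyngitis"],
    differential=["strep pharyngitis", "viral pharyngitis"],
    tests=["rapid strep antigen test"],
    medications=[Medication("amoxicillin", "500 mg", "TID",
                            contraindications=["penicillin allergy"])],
    citations=["IDSA pharyngitis guideline"],
)
```

Structured fields make each recommendation individually reviewable and diffable, which is exactly what the learning loop consumes.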
Key Deliverables
| Initiative | Owner | Due | Status |
|---|---|---|---|
| Structured Plan Output | ML Engineers + Clinical | Day 45 | pending |
| Retrieval + Citations Loop | ML Engineers | Day 50 | pending |
| 50% Review Time Reduction | Product + Engineering | Day 55 | pending |
| Learning Loop Pipeline | ML Engineers + Clinical | Day 55 | pending |
| Near-Miss Tracking System | Engineering + Clinical | Day 50 | pending |
| Integration Milestones | Backend Engineering | Day 60 | pending |
System Architecture: The Learning Loop
Clinical Frontline
MD modifications captured as structured diffs (e.g., "Drug A -> Drug B").
Vector Store
Edits automatically clustered by semantic similarity (e.g., "Antibiotic Resistance").
Policy Council
Clinical lead reviews cluster. Updates system prompt: "Prioritize local resistance patterns."
Model Registry
New version deployed to 5% traffic. Evaluation harness confirms +4% accuracy.
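The loop above can be sketched end to end. In production the clustering would run over embeddings in the vector store; here a simple key-based grouping stands in for semantic clustering, and all data and names are hypothetical:

```python
from collections import defaultdict

# Hypothetical structured diffs captured from MD edits.
edits = [
    {"field": "medication", "before": "amoxicillin", "after": "doxycycline"},
    {"field": "medication", "before": "amoxicillin", "after": "azithromycin"},
    {"field": "medication", "before": "amoxicillin", "after": "doxycycline"},
    {"field": "tests", "before": None, "after": "eGFR"},
]

def cluster_edits(edits, min_size=2):
    """Group edits by (field, before) and surface clusters large enough
    to warrant policy-council review."""
    clusters = defaultdict(list)
    for e in edits:
        clusters[(e["field"], e["before"])].append(e)
    return {k: v for k, v in clusters.items() if len(v) >= min_size}

# One cluster survives: repeated substitutions away from amoxicillin,
# a candidate for a system-prompt update on local resistance patterns.
flagged = cluster_edits(edits)
```

The point of the sketch is the shape of the loop: structured diffs in, reviewed clusters out, each cluster mapping to one policy, prompt, or model change.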
Days 61-90
Scale Responsibly
Strategic Goals
- Scale doctor-verified workflow. Production-ready verification: AI chat to structured encounter to MD sign-off with audit packet.
- Build operational reliability. Monitoring, alerting, canary releases, automatic rollback triggers.
- Own 2-3 clinical verticals. Deep expertise in high-volume, guideline-rich areas. Measurable advantage over alternatives.
- Establish team and cadence. AI org plan, hiring, and operating rhythm for sustained clinical AI excellence.
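The automatic rollback triggers can be sketched as a threshold check the canary system runs against live metrics. The metric names and thresholds below are illustrative assumptions, not production values:

```python
def should_rollback(canary, baseline, max_safety_delta=0.0, max_overturn_delta=0.05):
    """Automatic rollback check for a canary release.
    Thresholds are illustrative, not production values."""
    if canary["safety_event_rate"] > baseline["safety_event_rate"] + max_safety_delta:
        return True   # any rise in safety events rolls back immediately
    if canary["md_overturn_rate"] > baseline["md_overturn_rate"] + max_overturn_delta:
        return True   # MDs are overturning noticeably more plans
    return False

baseline = {"safety_event_rate": 0.001, "md_overturn_rate": 0.12}
canary = {"safety_event_rate": 0.001, "md_overturn_rate": 0.19}
# 0.19 exceeds 0.12 + 0.05, so this canary would be rolled back.
```

Note the asymmetry by design: safety events tolerate zero regression, while workflow metrics get a small allowance.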
Key Deliverables
| Initiative | Owner | Due | Status |
|---|---|---|---|
| Scalable Verification Workflow | Product + Engineering | Day 75 | pending |
| Production Monitoring Dashboard | Engineering + ML Ops | Day 70 | pending |
| Canary Release System | ML Ops + Engineering | Day 75 | pending |
| 2-3 Focused Verticals | CAIO + Clinical + Product | Day 85 | pending |
| AI Org Plan | CAIO + CEO | Day 80 | pending |
| Operating Rhythm Established | CAIO | Day 90 | pending |
Metrics & Safety Dashboard
Live metrics:
- Safety Event Rate
- MD Overturn Rate
- Avg Time to Review
- Escalation Accuracy

Charts:
- Volume vs. MD Intervention Rate
- Review Time Distribution
- Workflow Funnel (Daily)
- Model Release History
Workflow Visualization
System Logic: Step 1
Ingest unstructured audio/text. Vectorize historical context. Retrieve relevant guidelines via RAG.
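The retrieval step can be sketched with a bag-of-words stand-in for the real embedding model; the corpus, scoring, and function names are all illustrative assumptions:

```python
import math
import re
from collections import Counter

# Toy guideline snippets standing in for a vectorized guideline corpus.
GUIDELINES = [
    "strep pharyngitis: first-line penicillin or amoxicillin",
    "metformin: reduce or avoid when eGFR is below 45",
    "mammography screening generally begins at age 40",
]

def embed(text):
    """Bag-of-words counts as a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k guideline snippets most similar to the query."""
    q = embed(query)
    return sorted(GUIDELINES, key=lambda g: cosine(q, embed(g)), reverse=True)[:k]
```

The production version swaps the word counts for dense embeddings and the list scan for a vector-store query; the retrieve-then-rank shape is the same.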
Recent "Near Miss" Events
- Model suggested Amoxicillin for a patient with a Penicillin allergy listed in a structured EHR field but not in the note. Resolution: context window expanded to include the structured allergy list.
- Screening mammogram recommended at age 35 without family history. Resolution: RAG pipeline updated with USPSTF 2026 guidelines.
- Hallucinated "non-smoker" status when the field was empty. Resolution: added an "unknown" token for missing fields.
- Plan generated before lab results returned. Resolution: added a state check for pending results.
- Metformin 1000mg BID recommended without noting the patient's eGFR of 35. Resolution: mandatory eGFR check added for metformin.
- Model cited the non-existent guideline "AHA 2025 Headache Protocol". Resolution: citation verification layer added before output.
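The citation-verification layer from the last event can be sketched as an allow-list check before output; the registry contents and function name are assumptions for illustration:

```python
# Hypothetical allow-list of verifiable guideline identifiers.
KNOWN_GUIDELINES = {
    "USPSTF 2026 Breast Cancer Screening",
    "IDSA 2012 Group A Streptococcal Pharyngitis",
    "KDIGO 2024 CKD Evaluation and Management",
}

def verify_citations(citations):
    """Split model citations into verified and flagged; flagged citations
    are stripped from the plan and routed for human review."""
    verified = [c for c in citations if c in KNOWN_GUIDELINES]
    flagged = [c for c in citations if c not in KNOWN_GUIDELINES]
    return verified, flagged

ok, bad = verify_citations([
    "KDIGO 2024 CKD Evaluation and Management",
    "AHA 2025 Headache Protocol",  # the hallucinated citation from the event above
])
```

A fabricated citation never reaches the MD as if it were real; it arrives flagged, which is the difference between a near-miss and a trust failure.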
Team & Cadence
AI Team Hiring Plan
| Role | Count | Timing | Priority |
|---|---|---|---|
| Chief AI Officer | 1 | Day 1 | Critical |
| ML Engineer | 1 | Days 1-30 | Critical |
| Software Engineer | 1 | Days 1-30 | Critical |
| Clinical Advisor (Part-time MD) | 1 | Days 1-30 | Critical |
Operating Rhythm
Daily Clinical Review
Review flagged encounters, near-misses from past 24h
Weekly Safety Standup
Review safety metrics, near-miss trends, model performance
Bi-weekly Model Review
Evaluate model updates, plan releases, review learning loop
Monthly Trust Report
Comprehensive safety/quality report for leadership and board
Quarterly Clinical Board
Strategic review of clinical AI direction, major decisions
Reporting Structure
What Could Go Wrong & Contingencies
| Scenario | Contingency Plan |
|---|---|
| MD adoption is slower than expected | Pivot to async review model. Reduce friction with mobile-first interface. Offer incentives for early adopters. |
| Model accuracy plateaus below target | Narrow scope to highest-confidence conditions. Invest in domain-specific fine-tuning. Partner with academic medical center for data. |
| Regulatory pushback on AI-assisted care | Position as "clinical decision support" not "diagnosis." Proactive engagement with state medical boards. Robust audit trail as compliance asset. |
| Key hire falls through | Maintain relationships with 2-3 backup candidates. Leverage fractional/consulting talent for interim coverage. Adjust timeline if needed. |
Competitive Context
The clinical AI space is crowded but undifferentiated. Most competitors focus on either pure AI (no human oversight) or pure telehealth (no AI leverage). RealMDai's "doctor-verified AI" positioning is a genuine wedge.
Pure AI Players
Fast but risky. Regulatory exposure. Trust deficit with patients and providers.
Traditional Telehealth
Slow and expensive. MD bottleneck limits scale. No AI leverage on efficiency.
RealMDai (Our Position)
AI speed + MD trust + audit trail. Scalable and defensible. Regulatory-ready.