AI Governance Policy

Last Updated: May 7, 2026 | Version: 1.0 | Next Review: August 7, 2026

Core Commitment: We use AI to amplify human expertise. Every AI-generated output undergoes mandatory human review before reaching clients, regulators, or the public.

The Medvi Case Study (2024-2026)

Medvi, an AI-powered telehealth company, built a $401 million business with just 2 employees using ChatGPT, Claude, Grok, Midjourney, Runway, and ElevenLabs. The result: an FDA Warning Letter (Feb 2026, #721455) and an FTC investigation. Its specific failures, and our safeguards against each, are detailed below.

The "Medvi Test" — Mandatory 5 Questions

| # | Question | If YES → |
|---|----------|----------|
| 1 | Could this be mistaken for a real person's work? | Add attribution: "AI-generated, human-reviewed" |
| 2 | Could this affect a customer's financial or legal decision? | Mandatory human expert review |
| 3 | Could this be used as evidence in court or regulatory review? | Primary source documentation required; AI only for formatting |
| 4 | Would we be comfortable if a journalist exposed this as AI-generated? | Don't use AI |
| 5 | Does this require a disclaimer about accuracy? | Don't use AI; use human expert + Information Weight system |
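The five questions above amount to a pre-publication gate. A minimal sketch in Python, assuming a hypothetical helper (the question keys and function name are illustrative, not part of any existing system; the actions mirror the table):

```python
# Hypothetical pre-publication gate implementing the five Medvi Test
# questions. Keys and names are illustrative only.
QUESTIONS = {
    "mistaken_for_human_work": 'Add attribution: "AI-generated, human-reviewed"',
    "affects_financial_or_legal_decision": "Mandatory human expert review",
    "evidence_in_court_or_regulatory_review": "Primary source documentation required; AI only for formatting",
    "uncomfortable_if_exposed_as_ai": "Don't use AI",
    "requires_accuracy_disclaimer": "Don't use AI; use human expert + Information Weight system",
}

def medvi_test(answers: dict[str, bool]) -> list[str]:
    """Return the required action for every question answered YES."""
    return [action for key, action in QUESTIONS.items() if answers.get(key)]

# Example: AI-drafted content that could affect a legal decision
# and would otherwise ship with an accuracy disclaimer.
actions = medvi_test({
    "affects_financial_or_legal_decision": True,
    "requires_accuracy_disclaimer": True,
})
```

Any non-empty result means the content cannot ship as-is; an empty result still passes through the tiered checkpoints below.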

Tiered AI Usage Framework

| Tier | Use Case | Human Checkpoint | Examples |
|------|----------|------------------|----------|
| Tier 1: Production | Client-facing, regulatory, financial | Mandatory expert review | Risk scores, compliance reports, regulatory submissions, pricing models |
| Tier 2: Internal | Internal productivity, research | Manager review | Meeting notes, email drafts, research aggregation, presentation drafts |
| Tier 3: Prohibited | Fake personas, testimonials, regulatory claims | NEVER | AI-generated experts, fake reviews, unverified claims, false testimonials |
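The tiers can be enforced as a routing rule so that no AI output skips its checkpoint. A minimal sketch under stated assumptions: the use-case keys and default-to-strictest behavior are hypothetical design choices, not a prescribed implementation.

```python
from enum import Enum

class Tier(Enum):
    PRODUCTION = "Mandatory expert review"  # Tier 1: client-facing, regulatory, financial
    INTERNAL = "Manager review"             # Tier 2: internal productivity, research
    PROHIBITED = "NEVER"                    # Tier 3: fake personas, testimonials, claims

# Illustrative mapping drawn from the Examples column; extend as needed.
USE_CASE_TIER = {
    "risk_score": Tier.PRODUCTION,
    "compliance_report": Tier.PRODUCTION,
    "meeting_notes": Tier.INTERNAL,
    "email_draft": Tier.INTERNAL,
    "fake_review": Tier.PROHIBITED,
}

def required_checkpoint(use_case: str) -> str:
    """Return the human checkpoint an AI output must pass for this use case."""
    tier = USE_CASE_TIER.get(use_case)
    if tier is None:
        # Unknown use cases default to the strictest review until classified.
        return Tier.PRODUCTION.value
    if tier is Tier.PROHIBITED:
        raise ValueError(f"AI use prohibited for: {use_case}")
    return tier.value
```

Defaulting unknown use cases to Tier 1 review keeps the fail-safe direction consistent with the policy: uncertainty escalates to more human oversight, never less.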

Medvi's Failures vs. Our Safeguards

| Medvi Misuse | Consequence | Our Alternative |
|--------------|-------------|-----------------|
| AI fake doctors (Midjourney + GPT) | Consumer deception, FTC Act violation | Real Experts Only: verifiable credentials |
| AI face-swapped before/after photos | Deceptive advertising, FDA misbranding | 3-Source Validation for every significant claim |
| Unchecked AI customer service | Hallucinated prices, false claims, class action | Human-in-the-Loop: AI drafts, humans approve |
| AI content with accuracy disclaimer | No accountability, brand destruction | Information Weight System (A-E grading, no disclaimers) |
| 5,000+ unmonitored fake doctor ads | FTC investigation, affiliate shutdown | Partner KYC/KYB + real-time monitoring |
| No compliance review layer | FDA Warning Letter 721455 | Compliance-by-Design from day one |

Key Systems in Place

Implementation Roadmap

| Phase | Timeline | Actions |
|-------|----------|---------|
| Foundation | Months 1-3 | Define policy; establish human checkpoints; role-based AI access controls; create "AI Usage Log" |
| Operational | Months 4-6 | Deploy tiered framework; train staff on Medvi Test; dedicated compliance officer; automated monitoring for client-facing content |
| Scale | Months 7-12 | AI-assisted research (human fact-check mandatory); AI-powered test automation (human QA mandatory); AI-generated internal analytics (human interpretation mandatory); quarterly audits |
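The "AI Usage Log" called for in the Foundation phase can be as simple as an append-only record of tool, tier, purpose, and reviewer. A minimal sketch; the field names and JSON-lines format are assumptions, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    # Hypothetical fields; adapt to your own audit requirements.
    tool: str       # e.g. "ChatGPT", "Claude"
    tier: int       # 1 = production, 2 = internal (tier 3 is prohibited)
    purpose: str    # what the AI output was used for
    reviewer: str   # the human who approved the output
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def append_record(path: str, record: AIUsageRecord) -> None:
    """Append one JSON line per AI use, for the quarterly audits in the Scale phase."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

An append-only, one-line-per-use log keeps the audit trail tamper-evident and trivially greppable by tool, tier, or reviewer.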

The Golden Rule

"If Medvi did it with AI, we do it with human oversight.
If Medvi didn't review it, we triple-review it."

Medvi vs. Truth Meter + Sentinel Nexus

| Dimension | Medvi | Truth Meter + Sentinel Nexus |
|-----------|-------|------------------------------|
| Speed to Launch | 2 months (no compliance) | 6 months (compliance-first) |
| AI Usage | AI replaces humans | AI assists humans |
| Content Quality | "AI slop", hallucinations | Information Weight A-E grading |
| Compliance | Afterthought → FDA warning | Day-one architecture |
| Expert Claims | Fake AI doctors | Real verified experts only |
| Accuracy | Disclaimed (no accountability) | Guaranteed with audit trail |
| Oversight | 2 employees, no compliance | Dedicated compliance team |
| Result | $401M → regulatory disaster | Sustainable, regulator-approved |

Accountability & Contact

| Role | Contact |
|------|---------|
| Compliance Officer | eddy@highperformanceadvisory.com |
| Technical Lead | dzmitry@arli.ai |