Quick Answer
Shoppeal Tech has built HIPAA-compliant AI products for 3 US healthcare clients from our offshore team in India. The 3 non-negotiable HIPAA requirements for LLM applications: (1) a signed Business Associate Agreement (BAA) with every AI vendor that touches PHI (OpenAI, Anthropic, Google, and AWS all offer BAAs); (2) PHI must be de-identified before reaching the LLM, or the LLM must run in a HIPAA-compliant environment; (3) every access to PHI must be logged with audit controls. Teams that miss the BAA requirement are technically in violation, regardless of how secure their code is.
- #1 cause of AI-related HIPAA violations: missing BAA
- Max HIPAA penalty: $1.9M/year
- BAA-required vendors: all AI APIs
- Build timeline: 8–10 weeks
What HIPAA Actually Requires for LLM Applications
HIPAA's Security Rule applies to electronic Protected Health Information (ePHI). For AI applications, ePHI includes: patient names, dates (birth, admission, discharge), geographic data smaller than a state, medical record numbers, health plan beneficiary numbers, and any data element that could identify an individual in a healthcare context.
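The identifier categories above can be screened for mechanically before text ever leaves your perimeter. A minimal sketch, assuming illustrative regex patterns for three of the identifier types (real de-identification needs NER plus all 18 Safe Harbor categories, not a handful of regexes):

```python
import re

# Illustrative patterns only: dates, medical record numbers, and SSNs.
# Names, addresses, and other identifiers require NER, not regex.
EPHI_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def contains_ephi(text: str) -> list[str]:
    """Return the ePHI pattern categories detected in text."""
    return [name for name, pat in EPHI_PATTERNS.items() if pat.search(text)]
```

A gate like this can block an outbound request when `contains_ephi` returns a non-empty list, before any vendor API is involved.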
The 4 HIPAA requirements most LLM teams miss:
1. Business Associate Agreements with AI vendors: If your LLM sees PHI, your AI provider is a Business Associate under HIPAA. You must have a signed BAA. Without it, you are in violation the moment PHI touches the API, regardless of technical security.
2. Minimum necessary standard: Send only the PHI fields your AI needs for the specific task. If your AI is summarising discharge notes, don't send the full patient record.
3. Access controls for AI outputs: If your AI generates a summary containing PHI, that output carries the same access controls as the source PHI: role-based access, audit logging, and encryption at rest.
4. Breach notification: If PHI is inadvertently sent to an AI system without a BAA, it may constitute a breach requiring notification under HIPAA's Breach Notification Rule.
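Requirement 2, the minimum necessary standard, is straightforward to enforce in code. A sketch with a hypothetical task-to-fields allowlist (the task names and field names are assumptions for illustration):

```python
# Hypothetical allowlist: each AI task may only see the PHI fields
# it actually needs, enforcing the minimum necessary standard.
TASK_FIELDS = {
    "discharge_summary": {"discharge_notes", "medications", "follow_up"},
    "billing_code_suggest": {"diagnosis_codes", "procedures"},
}

def minimum_necessary(record: dict, task: str) -> dict:
    """Strip a patient record down to the fields allowed for this task."""
    allowed = TASK_FIELDS[task]
    return {k: v for k, v in record.items() if k in allowed}
```

Calling `minimum_necessary(full_record, "discharge_summary")` drops fields like SSN or insurance IDs before the record is serialized into a prompt.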
The HIPAA-Compliant LLM Architecture
Option 1: De-identify before LLM (preferred for most use cases): PHI → PHI de-identification layer (NER + rule-based) → de-identified text → LLM → response → PHI re-identification layer (if needed) → output.
This eliminates the need for a BAA with your LLM provider for the inference step. De-identification must satisfy HIPAA's Expert Determination or Safe Harbor method.
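The de-identification and re-identification layers above can be sketched as a token round-trip: detected PHI spans are swapped for placeholder tokens before the LLM call and restored afterward. The two regex patterns are illustrative stand-ins for a production NER + rule-based detector:

```python
import re

def deidentify(text: str) -> tuple[str, dict]:
    """Replace date/MRN matches with placeholder tokens; return the
    token-to-PHI mapping so the output can be re-identified later."""
    mapping: dict = {}
    counter = 0

    def _sub(match: re.Match) -> str:
        nonlocal counter
        token = f"[PHI_{counter}]"
        mapping[token] = match.group(0)
        counter += 1
        return token

    # Illustrative patterns; a real layer combines NER with rules.
    for pat in (r"\b\d{1,2}/\d{1,2}/\d{4}\b", r"\bMRN[:\s]*\d{6,10}\b"):
        text = re.sub(pat, _sub, text)
    return text, mapping

def reidentify(text: str, mapping: dict) -> str:
    """Restore original PHI into LLM output containing placeholder tokens."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

Only the tokenized text reaches the LLM; the mapping stays inside your environment, which is what removes the inference step from BAA scope.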
Option 2: HIPAA-compliant cloud LLM: AWS Bedrock with BAA, Azure OpenAI with BAA, or Google Vertex AI with BAA all operate within HIPAA-eligible environments. PHI can be sent to these services under an executed BAA.
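For Option 2, the request shape for AWS Bedrock's Converse API can be sketched as below. The model ID is an assumption for illustration; confirm that the specific model you choose is covered by your executed BAA before sending PHI:

```python
def build_bedrock_request(text: str) -> dict:
    """Build a Bedrock Converse-style request body.
    The model ID is illustrative, not a recommendation."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "messages": [
            {"role": "user", "content": [{"text": text}]},
        ],
    }

# The actual call (requires AWS credentials and an executed BAA):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**build_bedrock_request("Summarise: ..."))
```

Keeping request construction in a single function also gives you one place to attach the audit logging that requirement 3 demands.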
Option 3: On-premise/VPC LLM: Self-hosted Llama or Mistral model within your HIPAA-compliant AWS VPC. PHI never leaves your environment. Highest security, highest infrastructure cost.
Shoppeal Tech's standard build uses Option 1 for new clients: it's the fastest to implement and requires no LLM vendor negotiation.
Frequently Asked Questions
Does OpenAI offer a HIPAA BAA?
Yes. OpenAI offers BAAs for API customers, as do Anthropic, Google, and AWS. The BAA must be signed before any PHI reaches the API; sending PHI first and signing later does not cure the violation.
Can Indian AI companies building for US healthcare clients be HIPAA-covered?
Yes. HIPAA obligations follow the data, not the vendor's location: an offshore team that handles PHI for a US covered entity is a Business Associate, must sign a BAA, and must meet the Security Rule's safeguards. This is the model under which Shoppeal Tech delivers from India.
Explore More
Free AI Audit
30 minutes with the Shoppeal Tech team to review your AI stack and build a 90-day roadmap.
Book Free Audit
Related Service
Healthtech AI
Shoppeal Tech engineers deliver this end-to-end for enterprise teams.
View Service
BoundrixAI
The AI governance gateway: prompt injection protection, PII redaction, audit logging, and SOC2/DPDP compliance in one platform.
Request Demo
More AI Guides
Explore 15+ deep guides on AI governance, RAG, AEO/GEO, and offshore AI delivery.
Browse All Guides