Compliance · 11 min read
AI Compliance in Healthcare and Finance: What You Need to Know
A comprehensive guide to navigating HIPAA, SOC 2, PCI DSS, the EU AI Act, and other regulatory requirements when deploying AI in regulated industries, with specific cloud configurations and checklists.
Deploying AI in healthcare and finance requires navigating complex regulatory landscapes. The penalties for non-compliance are severe - HIPAA violations can cost up to $1.5 million per incident category per year, and PCI DSS non-compliance can result in fines of $5,000-$100,000 per month. Beyond penalties, a compliance failure erodes the trust that these industries are built on.
At Obaro Labs, compliance is not an afterthought - it is a design constraint that shapes our architecture from day one. This guide covers what you need to know to build compliant AI systems in regulated industries.
Healthcare: HIPAA Compliance for AI
HIPAA's requirements for AI systems fall into three categories: protecting PHI, maintaining audit trails, and managing business associate relationships.
Protecting PHI in AI Workflows
Every point in your AI pipeline where PHI is processed, stored, or transmitted must be secured:
- Data at rest: PHI must be encrypted using AES-256 or equivalent. This includes databases, file storage, vector stores, caches, and backups.
- Data in transit: All PHI transmission must use TLS 1.2 or higher. No exceptions - this includes internal service-to-service communication, not just external-facing endpoints.
- Data in processing: When PHI is being processed by an LLM, the LLM provider must have a BAA in place. The data must be transmitted securely and must not be retained by the provider for training purposes.
- Data in logs: This is the most commonly overlooked vector. Error logs, debug logs, and application logs frequently contain PHI inadvertently. All logging systems that could receive PHI need to be encrypted and access-controlled.
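The log-scrubbing idea above can be sketched with a standard `logging.Filter`. This is a minimal illustration, not a complete PHI detector - the two patterns (SSNs and a hypothetical `MRN:` format) stand in for whatever identifiers your systems actually emit, and production redaction should use a vetted detection library with broader coverage.

```python
import logging
import re

# Illustrative patterns only. Real PHI detection needs far broader
# coverage (names, dates of birth, addresses, account numbers, ...).
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE), "[REDACTED-MRN]"),
]

class PHIRedactionFilter(logging.Filter):
    """Scrub known PHI patterns from log records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, replacement in PHI_PATTERNS:
            msg = pattern.sub(replacement, msg)
        # Replace the formatted message so downstream handlers never see PHI.
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("phi-safe")
logger.addFilter(PHIRedactionFilter())
```

Attaching the filter to the logger (rather than to one handler) means every handler added later inherits the redaction, which closes the common gap where a new debug handler bypasses the scrubbing.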
Business Associate Agreement (BAA) Checklist
For every AI system handling PHI, verify the following BAA chain:
- Cloud provider BAA signed (AWS, GCP, or Azure)
- LLM provider BAA signed (OpenAI API, Anthropic API, or Azure OpenAI)
- Vector database provider BAA signed (if using managed service like Pinecone)
- Monitoring/logging provider BAA signed (or self-hosted)
- Email/notification service BAA signed (if sending PHI via notifications)
- Backup service BAA signed (if using third-party backup)
- CDN provider BAA signed (if PHI passes through CDN)
- All BAAs reviewed for AI-specific clauses (data retention, model training exclusions)
Critical note on LLM providers: As of early 2026, the BAA landscape for the major LLM providers looks like this:
- OpenAI: Offers BAA for API usage. Data is not used for training when using the API.
- Anthropic: Offers BAA for Claude API. Similar data handling policies.
- Azure OpenAI: Covered under Microsoft's Azure BAA. Often preferred by healthcare organizations because of the existing Microsoft relationship.
- Google Vertex AI: Covered under Google Cloud's BAA.
- Self-hosted models: No BAA needed for the model provider, but your hosting infrastructure must be HIPAA-compliant.
AWS Configuration for HIPAA-Compliant AI
Here are the specific AWS configurations we implement for healthcare AI systems:
VPC and Networking:
- Create a dedicated VPC for PHI-processing workloads
- Use private subnets for all services that handle PHI
- Configure VPC Flow Logs for network monitoring
- Use VPC endpoints for AWS services (S3, DynamoDB, SQS) to keep traffic off the public internet
- Configure Network ACLs and Security Groups following least-privilege
Compute:
- Use ECS Fargate or EKS with encryption enabled
- Enable AWS CloudTrail for all API activity logging
- Require instance metadata service v2 (IMDSv2) on all instances to mitigate SSRF-based credential theft
Storage:
- Enable S3 default encryption (SSE-KMS recommended over SSE-S3)
- Enable S3 access logging
- Configure S3 bucket policies to block public access
- Enable RDS encryption at rest and enforce SSL connections
- Enable DynamoDB encryption at rest
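Default encryption handles the happy path, but a bucket policy can also hard-deny any upload that does not request SSE-KMS, so even a misconfigured client cannot write unencrypted PHI. The sketch below builds such a policy as a Python dict; the bucket name is a placeholder, and you would apply the serialized JSON via your IaC tooling or the S3 console.

```python
import json

BUCKET = "example-phi-bucket"  # placeholder - substitute your bucket name

# Deny s3:PutObject unless the request specifies SSE-KMS encryption.
# With S3 default encryption also enabled, this is belt-and-suspenders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```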
Key Management:
- Use AWS KMS with customer-managed keys for PHI encryption
- Configure automatic key rotation (annual minimum)
- Restrict key access through IAM policies
Monitoring and Alerting:
- Enable GuardDuty for threat detection
- Enable AWS Config for continuous compliance monitoring
- Configure CloudWatch alarms for suspicious activity
- Enable AWS Security Hub for compliance dashboards
GCP Configuration for HIPAA-Compliant AI
Organization Policies:
- Enable organization policy constraints to prevent public access
- Restrict resource locations to approved regions
- Enable Access Transparency for audit logging
Compute:
- Use GKE with Workload Identity for secure service authentication
- Enable Binary Authorization to ensure only trusted containers run
- Configure Cloud Audit Logs for all admin and data access activity
Storage:
- Enable default encryption with Cloud KMS (customer-managed keys)
- Configure Cloud Storage bucket-level access control
- Enable Object Versioning for data integrity
- Use VPC Service Controls to create security perimeters
Finance: SOC 2 and PCI DSS for AI
Financial services AI must comply with SOC 2 for security controls and PCI DSS if the system processes, stores, or transmits payment card data.
SOC 2 Type II for AI Systems
SOC 2 evaluates controls across five trust service criteria. For AI systems, the key requirements are:
Security (Common Criteria):
- Access controls for all AI system components
- Encryption of data at rest and in transit
- Network security (firewalls, segmentation)
- Change management for model updates and prompt changes
- Vulnerability management and penetration testing
Availability:
- SLA definitions for AI system uptime
- Disaster recovery and business continuity plans
- Capacity planning for variable AI workloads
- Monitoring and alerting for system health
Processing Integrity:
- Validation that AI outputs are accurate and complete
- Error handling that prevents corrupted or partial outputs
- Quality monitoring with defined thresholds
- Audit trails for all AI processing
Confidentiality:
- Data classification for AI training data and outputs
- Access controls based on data classification
- Data retention and disposal policies
- Encryption requirements based on classification
Privacy:
- PII handling procedures for AI inputs and outputs
- Consent management for data used in AI processing
- Data subject rights (access, correction, deletion) applied to AI data
- Privacy impact assessments for new AI use cases
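The change-management control above (tracking model updates and prompt changes) is often the hardest to evidence for AI systems, because prompts change far more often than code. One lightweight approach, sketched below with illustrative field names, is to register every prompt revision with a content hash and a change ticket, giving auditors a verifiable record of exactly which prompt text was live and who approved it.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_prompt_version(prompt_text: str, author: str, ticket: str) -> dict:
    """Create an immutable change-management record for a prompt update.

    The SHA-256 content hash lets an auditor verify which prompt text was
    deployed; `ticket` ties the change to an approval workflow.
    """
    return {
        "sha256": hashlib.sha256(prompt_text.encode("utf-8")).hexdigest(),
        "author": author,
        "ticket": ticket,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

record = register_prompt_version(
    prompt_text="You are a claims-triage assistant. Summarize the claim.",
    author="jdoe",
    ticket="CHG-1042",  # illustrative change-ticket ID
)
print(json.dumps(record, indent=2))
```

Storing these records append-only alongside deployment logs makes the SOC 2 evidence request ("show me every prompt change this quarter, with approvals") a query instead of an archaeology project.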
PCI DSS for AI Processing Payment Data
If your AI system processes, stores, or transmits cardholder data, PCI DSS applies. This is less common than HIPAA in our experience, but it arises in fraud detection, payment processing automation, and customer service AI that accesses account information.
Key PCI DSS requirements for AI:
- Cardholder data must be encrypted with AES-256 and access restricted by business need
- Cardholder data must never be included in AI training data
- AI system components handling cardholder data must be within the PCI DSS scope (segmented from out-of-scope systems)
- All access to cardholder data by the AI system must be logged and monitored
- Quarterly vulnerability scans and annual penetration testing must cover AI components
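Enforcing "no cardholder data in training data" benefits from an automated gate. A common approximation, sketched below, scans text for 13-19 digit sequences and confirms candidates with the Luhn checksum that all major card brands use; the regex is illustrative and the check can produce false positives, so treat hits as blocks pending human review rather than proof of a PAN.

```python
import re

# Candidate PANs: 13-19 digits, optionally separated by spaces or hyphens.
CANDIDATE_PAN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by all major card brands."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_pan(text: str) -> bool:
    """Flag text that appears to contain a valid primary account number."""
    for match in CANDIDATE_PAN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False
```

Running this over every document before it enters a training or fine-tuning pipeline gives you an auditable control for the "never in training data" requirement, rather than relying on upstream systems to have filtered correctly.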
Model Explainability Requirements
Both healthcare and financial regulators are increasingly requiring model explainability - the ability to explain why an AI system made a particular decision.
Healthcare: The FDA's guidance on AI-based clinical decision support requires that healthcare providers can understand the basis for AI recommendations. This means logging the inputs, the retrieved context (for RAG systems), and the reasoning behind the output.
Finance: FINRA and the SEC expect that AI-driven trading, lending, and compliance decisions can be explained to regulators upon request. The EU's AI Act (discussed below) adds additional explainability requirements for high-risk AI systems.
How we implement explainability:
- Log all inputs, context, and outputs for every AI interaction
- For RAG systems, log the retrieved documents and their similarity scores
- For agent systems, log each reasoning step and tool call
- Build audit interfaces that allow compliance teams to review any AI decision
- Generate human-readable explanations for critical decisions
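The logging steps above can be collapsed into one structured audit record per interaction. The sketch below shows the shape we have in mind for a RAG system - the field names and the example model ID are illustrative, and the serialized record would be written to an append-only, access-controlled store.

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class RAGAuditRecord:
    """One reviewable record per AI interaction (field names illustrative)."""
    user_input: str
    retrieved_docs: list      # (doc_id, similarity_score) pairs
    model_output: str
    model_id: str
    interaction_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def write_audit_record(record: RAGAuditRecord) -> str:
    """Serialize the record as one JSON line for an append-only audit log."""
    return json.dumps(asdict(record))

line = write_audit_record(RAGAuditRecord(
    user_input="What is the prior-auth policy for MRI?",
    retrieved_docs=[("policy-0042", 0.91), ("policy-0108", 0.87)],
    model_output="Prior authorization is required when...",
    model_id="example-model-v1",
))
```

Because each record carries its own `interaction_id`, a compliance reviewer can pull the full chain - input, retrieved context with scores, and output - for any single decision without grepping application logs.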
Incident Response Template for AI Systems
When an AI compliance incident occurs, you need a structured response. Here is the template we use:
Phase 1: Detection and Assessment (0-4 hours)
- Identify the scope of the incident (which data, which users, which systems)
- Classify the severity (low/medium/high/critical)
- Determine if PHI or cardholder data was exposed
- Activate the incident response team
Phase 2: Containment (4-24 hours)
- Isolate affected systems if necessary
- Disable the AI feature if it is causing ongoing exposure
- Preserve evidence (logs, data snapshots)
- Implement temporary mitigations
Phase 3: Notification (24-72 hours)
- For HIPAA breaches affecting 500+ individuals: notify HHS within 60 days (but best practice is within 72 hours)
- For HIPAA breaches affecting fewer than 500 individuals: notify HHS annually
- Notify affected individuals without unreasonable delay
- Notify state attorneys general as required by state law
- For PCI DSS: notify the payment card brands through your acquiring bank
Phase 4: Remediation (1-4 weeks)
- Fix the root cause
- Validate the fix through testing
- Update security controls and monitoring
- Document lessons learned
- Update incident response plan based on findings
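The HIPAA thresholds in Phase 3 are easy to fumble under incident pressure, so we find it useful to encode them directly in the response runbook tooling. The helper below captures only the two HHS timelines stated above; it is a sketch of that lookup, not a substitute for counsel on the full notification obligations.

```python
def hhs_notification_requirement(individuals_affected: int) -> str:
    """Map breach size to the HHS notification timeline.

    500 or more individuals: notify HHS within 60 days of discovery.
    Fewer than 500: log the breach and include it in the annual report.
    """
    if individuals_affected >= 500:
        return "notify HHS within 60 days of discovery"
    return "log and report to HHS in the annual submission"
```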
EU AI Act Implications
The EU AI Act, which began phased enforcement in 2025, introduces significant new requirements for AI systems deployed in or serving EU users. Even US-based companies must comply if they serve EU customers.
Key requirements for healthcare and financial AI:
High-Risk Classification: AI systems used for medical diagnosis, treatment recommendations, creditworthiness assessment, and fraud detection are classified as "high-risk" under the EU AI Act. High-risk systems must comply with stringent requirements:
- Risk management system: A continuous risk identification, analysis, and mitigation process throughout the AI system lifecycle
- Data governance: Training data must be relevant, representative, and free from bias. Data provenance must be documented.
- Technical documentation: Comprehensive documentation of the system design, development process, testing methodology, and performance metrics
- Record-keeping: Automatic logging of events throughout the system lifecycle for traceability
- Transparency: Users must be informed that they are interacting with an AI system. For high-risk systems, sufficient information must be provided for users to interpret outputs.
- Human oversight: High-risk AI systems must be designed to allow effective human oversight, including the ability to override AI decisions.
- Accuracy and robustness: Systems must achieve appropriate levels of accuracy and resilience to errors.
Penalties: Non-compliance with the EU AI Act can result in fines up to 35 million euros or 7% of global annual revenue, whichever is higher.
Our recommendation: Even if you do not currently serve EU customers, build your AI systems to EU AI Act standards. These requirements represent best practices, and similar regulations are being developed in the US, UK, Canada, and other jurisdictions. Building to the highest standard now is cheaper than retrofitting later.
Best Practices for Compliant AI Architecture
Based on our experience across more than forty regulated AI deployments:
- Privacy by design: Build compliance into the architecture from day one. Do not bolt it on later. This means making architectural decisions (data flows, encryption, access control) with compliance requirements as primary constraints.
- Model documentation: Document everything about your AI system - training data sources, model selection rationale, evaluation methodology, known limitations, and ongoing monitoring approach. This documentation serves both internal governance and regulatory review.
- Regular audits: Schedule security and compliance audits quarterly for high-risk systems. Use automated compliance scanning (AWS Config, GCP Security Command Center) for continuous monitoring, supplemented by manual reviews.
- Incident response readiness: Maintain a documented, tested incident response plan specific to AI incidents, and conduct tabletop exercises at least annually. The worst time to figure out your response process is during an actual incident.
- Regulatory monitoring: Assign someone to track regulatory developments in AI. The landscape is evolving rapidly - new guidance from HHS, FDA, FINRA, SEC, and the EU AI Act implementation bodies arrives regularly.
Conclusion
Compliance is not optional - it is a competitive advantage. Organizations that build compliant AI from the start move faster than those that retrofit it later. They also build deeper trust with customers and regulators, creating a foundation for long-term success in regulated markets.
At Obaro Labs, every project in healthcare and finance begins with a compliance architecture review. If you are building AI for a regulated industry and want to ensure you are doing it right from the start, we would welcome the conversation.