ChatGPT is not HIPAA compliant by default. The free, Plus, and Pro versions all use your conversations to train OpenAI’s models unless you manually opt out, and none of these tiers support a Business Associate Agreement (the legal contract HIPAA requires before any vendor can handle protected health information). However, OpenAI does offer specific products and pathways that can support HIPAA-compliant use: ChatGPT Enterprise, ChatGPT for Healthcare, and the API platform.
Why Standard ChatGPT Fails HIPAA Requirements
HIPAA requires any third-party vendor that touches protected health information (PHI) to sign a Business Associate Agreement, or BAA. This contract makes the vendor legally responsible for safeguarding patient data. OpenAI does not offer a BAA for ChatGPT Free, Plus, or Pro plans. Without one, typing patient names, diagnoses, treatment details, or any other identifiable health information into ChatGPT violates HIPAA, period.
There’s also the training issue. Free, Plus, and Pro accounts default to sharing your conversations with OpenAI for model improvement. That means anything you type could be fed into the training pipeline. You can turn this off by going to Settings, then Data Controls, and toggling off “Improve the model for everyone.” That setting syncs across all your devices. But even with training disabled, these tiers still lack the security infrastructure and legal agreements HIPAA demands.
Which OpenAI Products Support HIPAA Compliance
OpenAI currently offers three pathways for organizations that need to handle PHI:
- ChatGPT for Healthcare: An enterprise-tier product built specifically for clinicians, administrators, and researchers. It includes a BAA, no training on your data, data retention controls, role-based access, data residency options, and audit logs.
- ChatGPT Enterprise: The broader business product that shares the same enterprise-grade security controls. It also never uses conversations for model training by default.
- The OpenAI API: Available to developers building their own health applications. You don’t need a full enterprise agreement to request a BAA for API use. OpenAI reviews requests on a case-by-case basis. Most API services are covered under the BAA, with some exceptions.
To request a BAA for the API, you email baa@openai.com with details about your company and intended use case. OpenAI may ask follow-up questions before approving.
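Once a BAA covers your account, an API integration looks like any other chat request; the difference is contractual, not technical. Here is a minimal sketch of assembling the JSON body for OpenAI's `POST /v1/chat/completions` endpoint. The model name, prompt text, and `build_chat_request` helper are all illustrative assumptions, and the example deliberately passes a record number rather than identifying details:

```python
# Sketch: assembling a chat request body for a BAA-covered API integration.
# The helper, model name, and prompt wording are illustrative, not prescriptive.

def build_chat_request(task: str, deidentified_note: str) -> dict:
    """Build the JSON body for POST /v1/chat/completions.

    Assumes identifiers have already been minimized upstream:
    pass medical record numbers, not names or dates of birth.
    """
    return {
        "model": "gpt-4o",  # illustrative; confirm which models your BAA covers
        "messages": [
            {"role": "system",
             "content": "You draft administrative healthcare documents."},
            {"role": "user",
             "content": f"{task}\n\nNote:\n{deidentified_note}"},
        ],
    }

request_body = build_chat_request(
    "Summarize this visit note for the chart.",
    "MRN 0042: routine follow-up, BP stable, refill issued.",
)
```

The point of isolating request construction in one function is that it gives your compliance team a single choke point to review: whatever reaches the API has to pass through it.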
Security Certifications Behind the Scenes
For the enterprise and API tiers, OpenAI has built up a substantial set of security certifications. The company holds SOC 2 Type 2 compliance, which means an independent auditor has verified its controls for security, availability, confidentiality, and privacy. It also maintains ISO 27001, 27017, 27018, and 27701 certifications covering information security and privacy management for the API, ChatGPT Enterprise, and ChatGPT Edu services.
All data is encrypted both in transit (while moving between your device and OpenAI’s servers) and at rest (while stored). The infrastructure runs on established cloud providers following industry-standard access controls and change management. These certifications don’t automatically make a product HIPAA compliant, but they form the technical foundation that HIPAA’s Security Rule requires: encryption, access controls, and audit capabilities.
A BAA Alone Doesn’t Make You Compliant
Signing a BAA with OpenAI is necessary but not sufficient. HIPAA compliance is a shared responsibility between you and your vendor. The HIPAA Security Rule requires three categories of safeguards that your organization must maintain on its own: physical safeguards like restricting access to workstations, technical safeguards like login credentials and encryption, and administrative safeguards like written policies, procedures, and staff training.
In practice, this means your organization still needs to control who can access ChatGPT, what types of PHI are permitted in prompts, how outputs are stored, and how staff are trained on appropriate use. Role-based access controls (available in Enterprise and Healthcare tiers) help here by letting administrators restrict which team members can use certain features. Custom data retention settings let you control how long OpenAI holds your data.
The “minimum necessary” standard also applies. Even with a compliant setup, staff should only input the minimum amount of patient information needed to accomplish the task. If you’re using ChatGPT to draft a patient letter, for instance, you might use a medical record number rather than a full name and date of birth.
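That substitution can be enforced in code rather than left to habit. Below is a minimal sketch under stated assumptions: the `draft_letter_prompt` helper is hypothetical, and the regex screen catches only blatant patterns. It is not a substitute for a real de-identification pipeline:

```python
import re

# Hypothetical helper: build a letter-drafting prompt keyed to a medical
# record number instead of a name and date of birth.
def draft_letter_prompt(mrn: str, reason: str) -> str:
    return (f"Draft a follow-up letter for the patient with record {mrn} "
            f"regarding: {reason}. Do not include names or dates of birth.")

# Crude screen for obvious direct identifiers before anything is sent.
# Catches only blatant patterns; a real pipeline needs far more than this.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),  # date-of-birth-like
]

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain a direct identifier."""
    for pattern in PHI_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain a direct identifier.")
    return prompt

prompt = screen_prompt(draft_letter_prompt("MRN-0042", "lab results review"))
```

A screen like this is a last-line tripwire, not a policy: the minimum-necessary decision still has to happen when staff choose what to type in the first place.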
What HHS Says About AI in Healthcare
The Department of Health and Human Services actively encourages AI use for automating administrative tasks in healthcare, as long as it stays within HIPAA protections for PHI. The agency’s AI strategy specifically calls out AI assistants and conversational tools as promising, while emphasizing that they need appropriate safeguards and should clearly distinguish educational content from clinical guidance.
HHS has also signaled that “high-impact” AI applications (those that could significantly affect health outcomes, individual rights, or public trust) will face especially strict oversight. For healthcare organizations, this means using ChatGPT to summarize notes or draft letters sits in a different risk category than using it to support clinical decision-making. The more consequential the use case, the more scrutiny it will attract.
The Bottom Line for Healthcare Organizations
If you’re a solo practitioner using ChatGPT Plus to help with patient notes, you’re almost certainly violating HIPAA. The consumer-facing versions of ChatGPT were not designed for regulated healthcare use, and no amount of toggling privacy settings changes the fact that there’s no BAA in place.
If your organization needs to use ChatGPT with any patient data, you need either ChatGPT for Healthcare, ChatGPT Enterprise, or a BAA-covered API integration. Even then, the technology is only one piece. Your internal policies, access controls, staff training, and data handling procedures all have to meet HIPAA’s requirements independently. The safest approach is to work with your compliance team to evaluate the specific use case, request the appropriate BAA, and build organizational safeguards before any PHI touches an OpenAI product.

