Engineering · Healthcare · Compliance
What HIPAA actually asks of an engineer
Three years building patient software for psychiatric care in eight US states taught me the difference between performative HIPAA compliance and the kind that actually keeps PHI safe.
6 May 2023·8 min read
I joined Mindful Care in 2018 to build the patient platform (booking, clinical notes, telehealth, billing, integrated EHR) for a psychiatric urgent care company that grew from one state to eight while I was there. Mental health is one of the most sensitive PHI categories the HIPAA framework recognizes. By the time I left we were running HIPAA-compliant infrastructure across four third-party integrations and a half-dozen environments.
Most engineers I meet think HIPAA is a checklist of encryption-in-transit and signed Business Associate Agreements. That's the easy 30%. The other 70%, the part that actually protects patients, is a set of engineering disciplines that don't show up in the documentation. These are the ones that mattered.
1. Minimum necessary access
HIPAA's “minimum necessary” rule says any person or system that touches PHI should see exactly the slice they need to do their job. Not the whole patient record. Not the whole visit. Just the slice.
In practice this is a database access pattern, not a policy. Every query that touches a patient table should be scoped by relationship: this user is the treating clinician for this patient, or this admin role has been granted aggregate-only access for billing reconciliation. Joins that pull “all patients” in a single query should fail by default; specific endpoints opt in.
We did this with a Phoenix-level access policy that wrapped every Ecto query against a PHI table. The policy took the actor (user plus role plus clinic), the resource (patient), and the operation (read vs export vs delete) and either narrowed the query or raised. It made certain dashboards slower to write, but every PR that touched PHI got reviewed for the scope it claimed. Engineers stopped thinking about access at the controller level and started thinking about it at the row level.
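Our policy lived in Elixir, wrapping Ecto queries; here's a minimal sketch of the same shape in Python for illustration. Every name here (roles, the relationship map) is hypothetical — the real system resolved treating relationships with a join against a care-team table, not an in-memory dict.

```python
from dataclasses import dataclass

class AccessDenied(Exception):
    pass

@dataclass(frozen=True)
class Actor:
    user_id: int
    role: str        # e.g. "clinician", "billing_admin"
    clinic_id: int

def scope_patient_query(actor, operation, treating_relationships):
    """Return the patient IDs this actor may touch, or raise.

    `treating_relationships` maps clinician user_id -> set of patient IDs.
    """
    if actor.role == "clinician":
        allowed = treating_relationships.get(actor.user_id, set())
        if not allowed:
            raise AccessDenied("no treating relationship on record")
        return allowed
    if actor.role == "billing_admin" and operation == "aggregate_read":
        # Billing reconciliation sees aggregates only; row-level reads raise.
        return "AGGREGATE_ONLY"
    # Default-deny: "all patients" is never an implicit grant.
    raise AccessDenied(f"{actor.role} may not {operation} patient rows")
```

The important property is the final line: the function has no branch that returns everything, so an endpoint that wants broad access has to add an explicit, reviewable case.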
2. Audit logging is the actual product
HIPAA requires a complete audit trail of every PHI access: reads, writes, exports. Six years of retention. Tamper-evident. This isn't a nice-to-have; it's the document that, if you have a breach, decides whether you go to court or settle.
The trap is engineering it as logging. If you implement it as Logger calls scattered around the codebase, you'll miss reads. You will. There's always a CSV export some engineer wrote for an ops person that bypasses your audit middleware.
Engineer it as a database trigger or a write-through wrapper at the repository layer. Every SELECT that touches a PHI table writes a row to an append-only audit table: actor, resource ID, operation, query fingerprint, IP. The audit table lives under a separate Postgres role, and no role except a quarterly retention worker holds a DELETE grant on it. If you can't delete rows, you can't tamper.
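A toy version of the write-through pattern, sketched with Python's stdlib sqlite3 (our real system used Postgres roles and grants; SQLite has no GRANT, so a trigger stands in for the missing DELETE privilege — all table and column names are illustrative):

```python
import sqlite3

def open_db():
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE audit_log (
            id         INTEGER PRIMARY KEY,
            actor      TEXT NOT NULL,
            patient_id INTEGER NOT NULL,
            operation  TEXT NOT NULL,
            at         TEXT NOT NULL DEFAULT (datetime('now'))
        );
        -- Stand-in for "no DELETE grant": make the table append-only.
        CREATE TRIGGER audit_append_only BEFORE DELETE ON audit_log
        BEGIN SELECT RAISE(ABORT, 'audit rows are append-only'); END;
    """)
    return db

def read_patient(db, actor, patient_id):
    # Write-through: the audit row is recorded in the same call path as
    # the read itself, so no code path can read without logging.
    db.execute(
        "INSERT INTO audit_log (actor, patient_id, operation) "
        "VALUES (?, ?, 'read')",
        (actor, patient_id),
    )
    row = db.execute(
        "SELECT id, name FROM patients WHERE id = ?", (patient_id,)
    ).fetchone()
    db.commit()
    return row
```

The point of putting the audit write inside `read_patient` rather than in middleware is exactly the CSV-export failure mode above: anything that reaches the data must go through the one function that logs.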
We caught two near-incidents this way. Both were engineers running ad-hoc queries against prod for “just one debugging thing.” Both showed up in the audit table, both led to a process change, neither became a breach. That's the value.
3. Encryption at rest is easy. Key management is the hard part.
Encrypted Postgres volumes, S3 server-side encryption, encrypted EBS: every cloud provider gives you these in two clicks. None of them meaningfully protects PHI on its own, because the encryption key is held by the same provider that holds the data. If the provider's access is compromised, so is the key.
For PHI columns specifically (social security numbers, diagnoses) we used application-layer encryption with keys held in AWS KMS under tight IAM policies. The actual cleartext was only ever decrypted in memory inside a request that had passed access control. KMS audit logs gave us an independent view of who decrypted what, when. If our database was somehow exfiltrated, the attacker would walk away with ciphertext.
The hard part isn't encryption. It's making sure the keys get rotated, that revocation works during off-boarding, and that no decryption call escapes audit. That's an engineering problem, not a cloud-config problem.
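The envelope pattern behind this is simple to sketch. The stub below stands in for KMS (which wraps per-column data keys under a master key and audit-logs every decrypt), and the XOR "cipher" is a deliberate toy to keep the sketch self-contained — anything real uses AES-GCM via KMS data keys. All names are hypothetical.

```python
import secrets
from itertools import cycle

class StubKMS:
    """Stand-in for AWS KMS: wraps data keys as opaque tokens and
    records every decrypt call (CloudTrail gave us this for real)."""
    def __init__(self):
        self._vault = {}
        self.decrypt_log = []  # independent audit trail of key use

    def generate_data_key(self):
        plaintext_key = secrets.token_bytes(32)
        wrapped = secrets.token_hex(16)  # opaque handle, not the key
        self._vault[wrapped] = plaintext_key
        return plaintext_key, wrapped

    def decrypt_data_key(self, wrapped, actor):
        self.decrypt_log.append((actor, wrapped))
        return self._vault[wrapped]

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- use AES-GCM in anything real.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def encrypt_column(kms, cleartext: str):
    key, wrapped = kms.generate_data_key()
    # Persist ciphertext plus the wrapped key; cleartext never hits disk.
    return _xor(cleartext.encode(), key), wrapped

def decrypt_column(kms, ciphertext: bytes, wrapped: str, actor: str) -> str:
    key = kms.decrypt_data_key(wrapped, actor)  # audited key fetch
    return _xor(ciphertext, key).decode()
```

The design property to notice: the database stores only `ciphertext` and `wrapped`, so exfiltrating the database yields nothing decryptable, and every decryption leaves a record in a log the database can't touch.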
4. Integrations are PHI extension cords
Mindful Care integrated with DrChrono (EHR), Active Campaign (CRM), Daily.co (telehealth), and Stripe (billing). Every integration is a new doorway for PHI to leave your perimeter. Every doorway needs a BAA, a data scope agreement, and an engineering control that enforces what the BAA says.
The discipline I'd push the hardest: never assume the third-party API will scope itself. If the EHR exposes “search patients by name,” your code should never use it. Every external call from your service should look like get_patient(patient_id), where patient_id has already passed your access policy. The third party then can't reveal more than your policy already decided to allow, even if it tried.
For Active Campaign, this meant only sending tagged events keyed by opaque IDs. Never names, never diagnoses, never appointment details. The CRM saw “patient_internal_id_1234 booked appointment_type_2.” The mapping back to a real patient lived only in our Postgres. That same discipline saved us when one of our integrations announced a security incident: we knew exactly what had been exposed and what hadn't, because what we sent over was deliberately impoverished.
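An allowlist at the outbound boundary is the cheapest way to enforce this. A minimal sketch (field names are hypothetical, not the actual payload schema we used):

```python
# Fields explicitly cleared for export to the CRM. Opaque IDs only:
# no names, no diagnoses, no appointment details.
ALLOWED_CRM_FIELDS = {"patient_internal_id", "event", "appointment_type_id"}

def sanitize_crm_payload(payload: dict) -> dict:
    """Reject any outbound CRM payload carrying a field that hasn't
    been explicitly cleared for export."""
    leaked = set(payload) - ALLOWED_CRM_FIELDS
    if leaked:
        raise ValueError(f"fields not cleared for export: {sorted(leaked)}")
    return payload
```

An allowlist fails closed: a new field added to an internal struct can't start leaking to the CRM without someone consciously adding it here, which turns "what do we send this vendor?" into a one-line code review question.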
5. The breach process is the real product
HIPAA's breach notification rule requires you to notify affected patients within 60 days of discovering a breach. The figure you'll spend the most time arguing about with your compliance officer is how many patients were affected. Audit logs make that figure concrete: we know exactly which records this attacker queried.
The thing nobody tells you is that the audit log isn't just for the regulator. It's how you avoid notifying patients who weren't actually affected. Without an audit log, the conservative answer is “all of them,” because you can't prove otherwise. With a clean audit log, the answer is “the 47 records this attacker actually looked at.” That difference is hundreds of thousands of dollars in notification costs and brand damage.
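Scoping a breach from the audit trail reduces to one query. A sketch over the audit rows described in section 2 (row shape and field names are illustrative):

```python
def affected_patients(audit_rows, compromised_actor, window_start, window_end):
    """The distinct patient records a compromised credential actually
    touched during the incident window -- the notification list."""
    return {
        row["patient_id"]
        for row in audit_rows
        if row["actor"] == compromised_actor
        and window_start <= row["at"] <= window_end   # ISO timestamps
        and row["operation"] in ("read", "export")
    }
```

Without rows like these, that set comprehension degenerates to "every patient in the database", which is the conservative answer the paragraph above describes.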
The takeaway
HIPAA isn't a regulatory burden you tolerate. It's a forcing function for engineering disciplines that good systems should have anyway: scoped queries, append-only auditability, KMS-managed keys, deliberate integration boundaries, breach playbooks. If you do them because HIPAA makes you, you'll find yourself accidentally building better software.
I left Mindful Care for SafeBoda and InstaEscrow, neither of which is regulated under HIPAA. But the access policies, the audit triggers, the application-layer encryption: all of those came with me. African fintech doesn't have HIPAA, but it has its own equivalents (the Kenya Data Protection Act, the Nigeria Data Protection Regulation, PCI-DSS for payments) and the engineering disciplines transfer cleanly. Compliance is mostly a forcing function for engineering you wanted to do anyway.