Monetizing the Untouchable
Health publishers can’t run ads. The FTC fined GoodRx $1.5 million, fined BetterHelp $7.8 million, and banned Cerebral from using health data for advertising entirely. Any publisher sitting on health-adjacent conversations treats that inventory as untouchable. The compliance risk is existential. The revenue stays at zero.

The conversation never leaves the publisher. The embedding derived from it enters a sealed enclave, encrypted in transit, where no party can read it. The FTC’s enforcement framework, as written and as applied in every case to date, has nothing to trigger on. And the signal is better than anything keywords can produce: a user who describes lateral knee pain on long runs with a race eight weeks out gets matched to a sports rehab PT who specializes in keeping athletes in training. Google would have matched “knee pain running” to the highest bidder.
Why Acquisition Is the Line
Every FTC health data enforcement action turned on the same fact: a third party acquired identifiable health information.
GoodRx sent prescription data to Facebook via tracking pixels. Facebook acquired it. BetterHelp sent mental health intake responses to Snapchat and Criteo. Cerebral sent diagnosis information to LinkedIn and TikTok. In every case, the ad platform received identifiable health data in cleartext. The Health Breach Notification Rule (16 CFR 318) is explicit: “acquisition of PHR identifiable health information without the authorization of the individual.” No acquisition, no breach.
The FTC’s consent decrees confirm this. Every one focuses on transfer: “respondent disclosed [health information] to [third party].” Every remedy prohibits future sharing. None prohibits future use of internally held data. The line is acquisition.
How No One Acquires It
The publisher keeps the conversation. When a user describes lateral knee pain on long runs, the publisher extracts the intent and computes an embedding on its own infrastructure. The raw conversation never leaves.
Advertiser embeddings are public. An advertiser’s position in embedding space is a claim of expertise: “sports injury rehab for competitive athletes who need to keep training.” The exchange publishes the full catalog, and the publisher caches a local copy. Phase one of the two-phase model runs entirely on the publisher’s servers: cosine distance between the conversation embedding and cached advertiser positions, rendered as a proximity indicator. No data moves. No auction runs.
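Phase one is simple enough to sketch end to end. The following is a minimal illustration, not a specification: the catalog names, the embeddings, and the 0.35 threshold are all hypothetical stand-ins.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means a closer intent match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def proximity_indicator(conversation_emb, advertiser_catalog, threshold=0.35):
    """Phase one, running entirely on the publisher's servers.

    Compares the conversation embedding against a locally cached copy of
    the public advertiser catalog and reports only whether a close match
    exists, so the UI can render an indicator. The threshold is an
    illustrative value. No data leaves, and no auction runs unless the
    user taps.
    """
    best = min(cosine_distance(conversation_emb, emb)
               for emb in advertiser_catalog.values())
    return best <= threshold
```

Note that the function returns a yes/no signal, not a winner: identifying a specific advertiser is deliberately deferred to the auction in phase two.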
If the user taps, the embedding enters an enclave running on AWS Nitro Enclaves. The publisher encrypts the embedding with the enclave’s public key, verified through remote attestation. The exchange operator’s infrastructure routes the ciphertext but cannot decrypt it. Inside the enclave, the TEE computes distances against all advertiser positions, combines them with each advertiser’s dynamic bids, budgets, and pacing rules, runs the full auction by scoring each advertiser as log(bid) - distance² / σ², and returns a winner ID and price. The embedding is destroyed after execution.
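The scoring rule can be sketched directly. This is illustrative only: σ = 0.5 is an arbitrary choice, budgets and pacing are omitted, and the second-score pricing is an assumption, since the text fixes the scoring rule but not the pricing rule.

```python
import math

SIGMA = 0.5  # relevance bandwidth; illustrative value, not from the source

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def run_auction(conversation_emb, bids):
    """Score every advertiser as log(bid) - distance^2 / sigma^2 and
    return (winner_id, price).

    `bids` maps advertiser ID -> (embedding, bid). Pricing here is a
    second-score clearing price (the lowest bid at which the winner
    still outscores the runner-up); that pricing rule is an assumption.
    """
    scores, dists = {}, {}
    for adv_id, (emb, bid) in bids.items():
        d = cosine_distance(conversation_emb, emb)
        dists[adv_id] = d
        scores[adv_id] = math.log(bid) - d ** 2 / SIGMA ** 2
    ranked = sorted(scores, key=scores.get, reverse=True)
    winner = ranked[0]
    if len(ranked) == 1:
        return winner, bids[winner][1]
    # Lowest bid at which the winner's score still ties the runner-up.
    price = math.exp(scores[ranked[1]] + dists[winner] ** 2 / SIGMA ** 2)
    return winner, min(price, bids[winner][1])
```

The log term is why money cannot simply buy distance from the user's intent: doubling a bid adds a constant to the score, while drifting from the conversation embedding subtracts quadratically.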
Publisher: keeps the raw conversation. Sends only an encrypted embedding that the exchange operator cannot read.
Exchange operator: deploys the enclave, cannot inspect its execution. AWS Nitro strips hypervisor access to enclave memory.
Advertiser: submits bidding rules to the TEE. Learns they won and what they pay. Nothing about the user.
Chatbot: doesn’t know advertising exists. Separate system, one-directional data flow.
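The encryption boundary between publisher and exchange operator can be sketched with ordinary asymmetric crypto. This is a minimal stand-in, not the Nitro protocol: the keypair is generated locally in place of a key taken from a verified attestation document, and direct RSA-OAEP on a tiny embedding stands in for the hybrid envelope encryption a real deployment would need.

```python
import struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# OAEP padding used on both sides; SHA-256 is an illustrative choice.
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_enclave(embedding, enclave_public_key):
    """Publisher side: serialize the embedding and encrypt it so only
    the holder of the enclave private key can read it. The operator's
    infrastructure routes the resulting ciphertext but cannot open it."""
    plaintext = struct.pack(f"{len(embedding)}f", *embedding)
    return enclave_public_key.encrypt(plaintext, OAEP)

def decrypt_inside_enclave(ciphertext, enclave_private_key):
    """Enclave side: the only party holding the private key."""
    plaintext = enclave_private_key.decrypt(ciphertext, OAEP)
    return list(struct.unpack(f"{len(plaintext) // 4}f", plaintext))
```

In the real flow the publisher would first verify the enclave's attestation document and extract the public key from it, so "encrypted to the enclave" and "encrypted to attested code" are the same statement.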
Why This Is Different from Data Clean Rooms
The FTC published a blog post on data clean rooms in November 2024. The message was blunt: DCRs “can be used to obfuscate privacy harms,” and the technology “isn’t inherently protective.” A health publisher’s compliance team would read this and stop.
But DCRs fail because the data still moves. The operator receives it, promises not to look, and sometimes looks anyway. GoodRx had confidentiality provisions with Facebook. The FTC was unimpressed.
Intent casting is different in kind. The conversation stays with the publisher. The embedding enters sealed hardware, encrypted in transit, unreadable to the exchange operator. The HBNR covers “unsecured” PHR identifiable health information, defined as data not “rendered unusable, unreadable, or indecipherable” through specified technologies. An embedding encrypted in transit and processed inside hardware the operator cannot access meets that standard. The FTC has never tested this safe harbor in an enforcement context involving confidential computing. But the statutory language supports it.
State health privacy laws like Washington’s My Health My Data Act regulate collection, sharing, and sale of health data. The exchange operator receives ciphertext it cannot read, processed inside hardware it cannot inspect, destroyed after execution. If routing encrypted data to a TEE constitutes “collection,” then every HTTPS request to a cloud-hosted health app is collection by the CDN. That standard would break the internet.
Google serves pharmaceutical ads against “depression symptoms” and PT ads against “knee pain running” every day. The FTC has never pursued this, because matching a relevant ad to a voluntarily submitted query is search working correctly. Intent casting does the same thing with stronger privacy: the conversation stays with the publisher, the embedding is encrypted in transit, and nobody retains it afterward.
Consent That Holds Up
The FTC’s updated Health Breach Notification Rule requires consent that is affirmative, clear, and standalone. Cookie banners fail this. They are designed to be clicked through. Dark patterns make rejection harder than acceptance. Regulators on both sides of the Atlantic are cracking down, which tells you what they think of checkbox consent.
The two-phase model satisfies the standard by construction. Phase one shows a proximity indicator, computed locally against cached advertiser embeddings on the publisher’s own infrastructure. No data leaves the publisher. No auction runs. Phase two requires the user to tap. Then the encrypted embedding enters the TEE and the auction fires. The default state is no ad. The user who never taps never sends any data beyond the publisher’s servers. Consent is expressed through action.
Proving You Did What You Said
GoodRx said it protected user privacy. Its tracking pixels said otherwise. Every FTC health data case has the same structure: a stated commitment to privacy, and a technical implementation that contradicted it.
The trust chain makes this gap impossible. Open-weight embedding models with published hashes, so anyone can reproduce the embedding. Published auction source code running inside an attested TEE, so anyone can verify the auction ran honestly. The publisher’s matching is reproducible: same model, same advertiser catalog, same distances. The claim and the implementation are the same artifact.
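The model-hash step of that chain is trivial to implement, which is part of the point. A minimal sketch, assuming the exchange publishes a SHA-256 digest of the weights file:

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream-hash a file so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path, published_hash):
    """Check that downloaded weights match the published digest. Anyone
    who passes this check can reproduce any embedding the publisher
    claims to have computed with the same model."""
    return sha256_file(path) == published_hash.lower()
```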
Compare this to the status quo. Google’s Quality Score is proprietary; no advertiser or regulator can verify how it works. OpenAI’s “Answer Independence Principle” is a policy claim with no attestation.
The Revenue Case
Health-adjacent inventory is some of the highest-intent traffic on the internet. A user describing symptoms to a chatbot is further down the funnel than someone typing two words into a search box. That intent is currently unmonetized because the compliance risk outweighs any possible ad revenue.
Intent casting resolves the tradeoff. The scoring function rewards proximity between the user’s need and the advertiser’s expertise. A sports rehab PT who positions accurately wins the runner’s query at a price that reflects genuine relevance. The publisher earns revenue on inventory that was previously dead, and the advertiser reaches a user they couldn’t reach through keywords, because the signal is richer than anything a two-word search query could carry.
The compliance constraint and the revenue incentive point the same direction. Better privacy architecture produces better matching, which produces better CPMs, which makes the privacy architecture worth building.
Where This Stands
Google retains health-intent search queries indefinitely and matches them against pharmaceutical advertisers. The data doesn’t leave the building because Google is the building. Meta proposed using chatbot conversations for ad targeting with no opt-in. Incumbents comply by being the only party in the room.
Intent casting complies differently. The publisher keeps the conversation. The embedding enters sealed hardware, encrypted in transit, destroyed after execution. No party acquires readable health information. That’s the whole argument. Not “trust us.” We prove it.
Written with Claude Opus 4.6 via Claude Code. I directed the argument and framing; Claude researched FTC enforcement history and drafted prose.
Part of the Vector Space series. june@june.kim