Why Your Therapy App Privacy Matters More Than You Think
- James Colley
- Nov 20
- 15 min read
Most people don’t open a therapy app while thinking about metadata, device identifiers, or third-party analytics frameworks. They open it because something in their life feels heavy or unmanageable. A relationship is fracturing, or anxiety keeps pulsing late at night, or burnout has hollowed out the ability to cope. In those moments, a therapy app feels like a quiet doorway to relief. And because these apps present themselves in a tone that resembles healthcare—gentle colours, soft assurances, clinical language—users naturally assume that their information is protected by laws and norms similar to a therapist’s office.
It isn’t.
The world of mental-health apps sits in a regulatory grey zone where medical-grade intimacy coexists with consumer-grade data extraction. Apps can collect far more than what users type. They can record where users are, how long they stay, when they return, and what behavioural patterns emerge over weeks, months, or years. From those patterns they can infer why users keep coming back. They can legally share or monetise some of that information because many of these apps are not treated as healthcare providers under US law. Most users have no idea. They assume confidentiality. What they get is something closer to the modern advertising ecosystem.
This tension—the gap between emotional vulnerability and technological reality—is at the heart of why therapy app privacy matters far more than most people realise. To understand the scale of this issue, it helps to begin with the most fundamental question users ask, often quietly: is my data really private?
A foundational exploration of AI mental-health tools and how they function, AI Therapy: The Complete Guide to the Future of Mental Health Support, sits in the therappai resource library. Understanding how AI is used in mental-health technology provides essential context for understanding how these same tools collect, store, analyse, and sometimes transmit sensitive user information.

What Data Do Therapy Apps Actually Collect?
When users ask, “Is my data private?”, they often imagine the question is about the content of their therapy sessions—the words, confessions, and emotional disclosures they type or speak into the app. And while this content is indeed collected and stored by many platforms, it is only one small layer of a much larger system of data collection. In reality, therapy apps often gather three primary categories of information: personal mental-health information, behavioural usage patterns, and device-level identifiers. Each category on its own is sensitive. Together, they form an extraordinarily detailed picture of a user’s emotional life, daily rhythms, vulnerabilities, and identity.
A 2019 study in JAMA Network Open found that the majority of popular mental-health apps transmitted data to Facebook or Google through embedded software development kits, often without clear disclosure in their privacy policies. Consumer Reports reached similar findings, discovering that apps marketed as private mental-health tools still sent user identifiers and app-interaction events to major advertising platforms. And Mozilla’s 2023 Privacy Not Included report concluded that most mental-health apps they examined had “worse or very concerning” privacy practices—an assessment made after reviewing data-sharing statements, connection logs, and the presence of third-party trackers.
Against this backdrop, two expert insights capture the heart of the issue.
A Head of Data Science we interviewed described the illusion of safety created by so-called anonymised data:
“Anonymised doesn’t mean anonymous. You can remove names, but behavioural patterns, emotional rhythms and device identifiers are fingerprints. They re-identify people quickly when combined with external sources.”
A CTO explained the unseen dimension of therapy app interactions:
“The content of therapy matters, but so does the metadata—when someone logs in, how long they stay, what time of night they return, or how their behaviour changes during stress. That metadata can be more revealing than the session itself.”
With this context in mind, we look closely at each of the three data layers therapy apps often collect.
Personal Health Information and Mental Health Data
When a user signs up, they are typically asked to complete an intake questionnaire that resembles the paperwork of a therapist’s office. But the difference is enormous: in a clinic, those disclosures are protected by medical confidentiality laws; in an app, they are governed by a consumer privacy policy—an entirely different legal framework.
The intake process often asks for details about anxiety severity, depressive symptoms, trauma history, panic attacks, suicidal thoughts, medication regimens, sexual orientation, gender identity, relationship stress, sleep disturbances, or past therapy experiences. These are the most intimate elements of a person’s emotional life. Many apps store this information by default, often indefinitely unless users request deletion.
Beyond intake, the app collects the content users generate while seeking help—typed conversations, voice messages, journal entries, emotional check-ins, and reflections. These disclosures often reveal patterns of thinking, maladaptive beliefs, emotional triggers, and psychological vulnerabilities. They form long-term behavioural records that are immensely valuable to algorithms, advertisers, and third-party analytics systems—not because of what the user said, but because of what it reveals about their internal world.
Some apps also collect data related to health history, including medications taken, diagnoses received, treatment preferences, and whether users have been hospitalised for mental-health reasons. This information is considered extremely valuable in the advertising world because it helps companies target vulnerable consumers with precision. Brookings highlighted cases where mental-health apps used collected data to train unrelated AI tools or shared data with advertisers for behavioural targeting, demonstrating the fragility of user privacy in this space.
It’s important to emphasise that users rarely understand how much they are surrendering. Therapy app branding leads people to assume medical-grade confidentiality when the reality is closer to the metrics-driven ecosystem of wellness technology.

Usage Patterns and Behavioural Metadata
Metadata—“data about data”—is often more revealing than the content it surrounds. While a message might say “I’m feeling overwhelmed,” behavioural metadata can show that the user opened the app at 2:13am, stayed for eight minutes, returned again at 3:47am, and repeated this pattern four nights in a row. Those details say far more about someone’s emotional state than the words alone.
Metadata captures when users log in, how long they stay, how often they check in, which tools they open, how many messages they send, how quickly they respond, whether their behaviour changes during stressful life phases, and whether their usage spikes during loneliness or insomnia. It records whether people linger on grounding exercises, skip meditation tools, or repeatedly view crisis-related content.
A Head of Data Science explained that metadata forms “an emotional rhythm signature.” Even if two people type identical sentences, their metadata differs. It reveals patterns. It reveals habits. It reveals vulnerability.
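To make that idea concrete, here is a small illustrative sketch in Python (the timestamps, field names, and thresholds are invented for this example, not drawn from any real app). It shows how a handful of bare login records, with no message content at all, can be reduced to exactly the kind of rhythm signature described above.

```python
from datetime import datetime

# Hypothetical session log: (login time, minutes spent). No message content at all.
sessions = [
    ("2024-03-04 02:13", 8),
    ("2024-03-04 03:47", 5),
    ("2024-03-05 02:31", 12),
    ("2024-03-06 01:58", 9),
    ("2024-03-07 02:20", 7),
]

def rhythm_signature(sessions):
    """Summarise bare timestamps into a behavioural 'signature'."""
    parsed = [(datetime.strptime(t, "%Y-%m-%d %H:%M"), mins) for t, mins in sessions]
    night = [s for s in parsed if s[0].hour < 5]  # opens between midnight and 5am
    return {
        "total_sessions": len(parsed),
        "late_night_share": round(len(night) / len(parsed), 2),
        "avg_minutes": round(sum(m for _, m in parsed) / len(parsed), 1),
        "distinct_nights": len({s[0].date() for s in night}),
    }

print(rhythm_signature(sessions))
# {'total_sessions': 5, 'late_night_share': 1.0, 'avg_minutes': 8.2, 'distinct_nights': 4}
```

Nothing in that summary is a therapy transcript, yet it plainly describes someone who is struggling at night.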
Location data is another layer. Some apps never explicitly request location permissions, yet can still infer rough location from IP addresses or device identifiers. That geographic information can be sold or shared through third-party partners, allowing companies to map emotional behaviour to physical places.
A 2022 analysis published in Springer’s Software Quality Journal examined dozens of mental-health apps and found widespread vulnerabilities, including insecure cryptography and permissions that allowed apps to gather unnecessary behavioural information. The study concluded that these vulnerabilities created a “high risk of user profiling” because the apps’ design allowed external parties to reconstruct user journeys across multiple contexts.
This is metadata at work—not the user’s explicit content, but the behavioural patterns surrounding it.
Device Identifiers and Technical Information
Every phone has unique identifiers. These identifiers don’t contain your name, but they are effectively tags that allow companies to follow your device across multiple apps, websites, and advertising networks. Therapy apps commonly contain embedded trackers from Meta, Google, Snapchat, TikTok, or analytics companies like Mixpanel or Amplitude.
These trackers collect device IDs, advertising identifiers, IP addresses, crash events, screen views, and app-interaction logs. They are the route through which therapy app usage can end up inside advertising profiles.
Consumer Reports found that several mental-health apps transmitted device identifiers to Facebook even when users were not logged into Facebook. STAT News reported on mental-health apps that used device-level data to fuel advertising optimisation pipelines unrelated to health.
This process, known as cross-app tracking, allows companies to infer that a user engaged with mental-health content, opened a therapy app multiple times at night, or repeatedly accessed stress-related tools. They don’t need your name. They need your device.
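As a simplified sketch of that mechanism (the identifiers, event names, and apps below are invented; this is not any real platform’s pipeline), the snippet shows how event logs from two unrelated apps can be joined on a shared advertising identifier.

```python
# Hypothetical event logs from two unrelated apps, both embedding the same
# third-party SDK. The advertising identifier (ad_id) is the join key.
therapy_app_events = [
    {"ad_id": "7F3A-...-91C2", "event": "open_panic_tool", "hour": 2},
    {"ad_id": "7F3A-...-91C2", "event": "open_panic_tool", "hour": 3},
]
shopping_app_events = [
    {"ad_id": "7F3A-...-91C2", "event": "view_product", "item": "sleep supplement"},
]

def build_ad_profile(*event_logs):
    """Merge events from many apps into one per-device advertising profile."""
    profiles = {}
    for log in event_logs:
        for e in log:
            profiles.setdefault(e["ad_id"], []).append(e["event"])
    return profiles

profiles = build_ad_profile(therapy_app_events, shopping_app_events)
# The platform never sees therapy content, yet the profile now links this
# device to late-night panic-tool usage and to purchase intent.
print(profiles["7F3A-...-91C2"])
# ['open_panic_tool', 'open_panic_tool', 'view_product']
```

The join key is the device, not the person, which is exactly why “we never share your name” offers little comfort.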
This is why mental-health metadata is considered commercially valuable: it predicts behaviour with extraordinary accuracy.
To understand how modern AI tools interpret emotional and behavioural patterns—and how responsible systems manage this safely—the therappai resource How AI Is Used in Modern Mental Health Apps is an excellent supplementary read.
Why HIPAA Doesn’t Protect Most Therapy App Users
People often assume therapy apps operate under HIPAA. They don’t. HIPAA applies only to covered entities (healthcare providers, health plans, and healthcare clearinghouses) and to the business associates that handle health data on their behalf. Most therapy apps exist outside this universe because they structure themselves as wellness, coaching, digital health, or AI-driven self-care tools.
A General Counsel described this as “covered entity confusion”:
“Users see therapy language and assume the law sees a healthcare provider. Legally, it’s just software unless it is explicitly delivering clinical care under a healthcare licence.”
This is where misunderstandings become dangerous. Even when therapy apps employ licensed therapists, many separate the therapist-user relationship from the platform’s broader data collection. Messages exchanged with the therapist may be protected, but everything users disclose during signup or daily app engagement may not be. Intake responses, usage patterns, and device identifiers often sit outside HIPAA protection completely.
A CTO noted that companies should treat all user input as protected health data, even when not required by law, because regulators appear increasingly likely to expand definitions of health data to include emotional and behavioural information. But many apps take advantage of the current ambiguity to collect and monetise more than users realise.
The Covered Entity Loophole
HIPAA was written in the 1990s for a healthcare system that did not imagine smartphones, AI-driven therapy models, or millions of people turning to apps instead of clinics. Under HIPAA, only healthcare providers, health plans, and healthcare clearinghouses are “covered entities.” Unless a therapy app is directly delivering care under a medical licence, or acting as a business associate to a licensed provider, it does not fall under HIPAA’s jurisdiction.
This means a therapy app may look and feel like a clinical environment, but the law sees it as a piece of consumer software. Even apps that feature licensed therapists often structure the therapeutic interaction so that only the therapist’s messages are protected, while the platform continues collecting behavioural data, device identifiers, intake responses, and user habits under the umbrella of a generic consumer privacy policy.
A legal expert described this structure as “a loophole big enough to walk a server farm through.” Many apps advertise HIPAA compliance in their marketing materials, but that compliance often applies narrowly to in-session therapist communications, not to any of the data surrounding that relationship. Intake forms, engagement metrics, app activity logs, device behaviour, and metadata may all fall outside HIPAA’s protection entirely.
Business Associate Agreements (BAAs), which are designed to restrict how data is handled for clinical care, apply only to very specific workflows. Even when present, they govern only the information processed for healthcare operations—not the broader universe of analytics, marketing optimisation, software improvements, or third-party integrations that most therapy apps rely on. A user may believe their entire emotional life is protected, when in reality only a small slice of it is.
This confusion is not the user’s fault. The line between therapy and wellness has been blurred by the design of these apps, and by the emotional intimacy they facilitate. People share things in these platforms that they would be reluctant to tell a close friend. Yet, without HIPAA protection, that information may legally flow into the advertising ecosystem.
What This Means For Your Mental Health Data
The legal gap produces consequences many users never anticipate. When an app is not bound by HIPAA, it can share data—with varying degrees of transparency—with advertisers, analytics companies, data brokers, and social media platforms. Because many apps define their data practices in broad, permissive terms (“for service improvement,” “for marketing optimisation,” “for personalised experiences”), users often have no idea how widely their information may travel.
As far as HIPAA is concerned, it is perfectly legal for a therapy app to sell de-identified emotional data to a data broker. It is legal to share device identifiers with an advertising platform. It is legal to allow Meta or Google to collect app-interaction events that can later be used to target users with ads linked to their emotional patterns. It is legal for an app to share information about how often users seek help late at night or how frequently they access anxiety-related tools—because none of this is considered protected health information under HIPAA.
A Head of Data Science explained the practical implication: “Once emotional data enters the commercial ecosystem, it can be copied, resold, analysed, and recombined infinitely. At that point, control is gone.”
If such data is mishandled or misused, HIPAA provides no recourse. Users cannot file HIPAA complaints. They cannot claim HIPAA violations. They cannot rely on medical-grade privacy protections because, legally, they were never being treated as patients. They were users of a consumer technology product.
This reality is uncomfortable because it means the protections people assume are simply not there.
How Therapy Apps Share Your Data With Tech Companies
When people hear “data sharing,” they often imagine therapy transcripts flowing straight into Facebook’s servers. But the reality is subtler—and more expansive. Most therapy apps share information through embedded tracking frameworks and advertising SDKs that transmit device identifiers, app events, timing patterns, and behavioural signals to companies like Meta, Google, Snapchat, Pinterest, or Criteo.
A Head of Product described it as “the advertising inference problem”: even if an app never shares therapy content, the simple fact that someone uses a mental-health app is itself valuable information. If a user opens an anxiety-focused tool at 1:43am three nights in a row, or repeatedly engages with a feature designed for people experiencing panic, advertisers can infer vulnerability without ever reading a word of the user’s messages. From there, ads for supplements, alternatives to therapy, financial services, or questionable wellness products can be targeted at moments of emotional sensitivity.
A General Counsel called this phenomenon “consent theatre.” Privacy policies and terms of service technically obtain user consent to broad data sharing. But nobody reads those documents, nobody understands the implications, and nobody imagines that opening a therapy app may feed an advertising profile that later influences what products they’re shown. Legal consent exists. Informed consent does not.
Enforcement actions have revealed cases in which therapy apps transmitted hashed email addresses, device identifiers, or behavioural events to tech platforms for marketing analysis. Some apps shared details about users signing up for certain mental-health services, or about their engagement with specific therapeutic modules, which allowed platforms to categorise users as being interested in “stress relief,” “anxiety help,” or “wellness recovery.” These categories are then used within advertising frameworks to target users across the internet.
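To see why “we only share hashed email addresses” is weaker protection than it sounds, consider the sketch below (a generic illustration, not a description of any specific vendor’s matching API). A hash of a normalised email is a stable identifier: any platform that already holds the same email can compute the same hash and link the event back to a named account.

```python
import hashlib

def hashed_id(email: str) -> str:
    """Normalise and hash an email the way ad-matching pipelines commonly do."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# What a therapy app might send alongside an event (hypothetical payload):
event = {
    "hashed_email": hashed_id("Jane.Doe@example.com"),
    "event": "signed_up_for_anxiety_program",
}

# What an ad platform that already holds the same email can compute independently:
known_user_hash = hashed_id("jane.doe@example.com")

print(event["hashed_email"] == known_user_hash)  # True: the 'anonymous' hash matches
```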
Understanding the advertising ecosystem is key to understanding mental-health data flow: therapy apps often act as the starting point for a cascade of commercial activity.
Who Receives Your Data
A wide range of companies can receive information—directly or indirectly—from therapy app usage. Social media platforms like Facebook, Snapchat, or Pinterest may receive device identifiers and behavioural triggers that help them classify users into interest categories. Advertising networks such as Google or Criteo may use device-level signals to refine ad targeting across millions of sites. Analytics companies track how people behave inside the app, collecting information about user journeys, emotional rhythms, and feature interactions.
Data brokers pose an even larger concern. These companies collect fragments of identity, behaviour, location, and psychology from thousands of sources, and resell aggregated profiles to advertisers, insurers, political campaigns, and other organisations. Once mental-health-related data enters this ecosystem, it becomes almost impossible to delete because it spreads across companies that users have never heard of and never directly interacted with.
Researchers may also gain access to “de-identified” datasets. But de-identification is often reversible. Studies have repeatedly shown that behavioural metadata can re-identify individuals with ease when combined with location data, app usage history, and demographic information.
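A deliberately tiny, invented example illustrates how quickly that happens: a “de-identified” usage record can often be joined to a marketing list with names attached using nothing more than a few quasi-identifiers.

```python
# "De-identified" therapy-app record: no name, no email.
usage_record = {"zip3": "941", "birth_year": 1988, "gender": "F",
                "pattern": "late-night anxiety tool use"}

# A commercially available marketing list (names attached).
marketing_list = [
    {"name": "A. Rivera", "zip3": "941", "birth_year": 1971, "gender": "F"},
    {"name": "J. Doe",    "zip3": "941", "birth_year": 1988, "gender": "F"},
    {"name": "K. Patel",  "zip3": "306", "birth_year": 1988, "gender": "F"},
]

quasi_identifiers = ("zip3", "birth_year", "gender")
matches = [p for p in marketing_list
           if all(p[k] == usage_record[k] for k in quasi_identifiers)]

if len(matches) == 1:
    # A unique match: the "anonymous" behavioural record now has a name.
    print(matches[0]["name"], "->", usage_record["pattern"])
# J. Doe -> late-night anxiety tool use
```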
This interconnected web means that therapy app usage can influence how companies across multiple industries perceive, classify, and target an individual—even when the app never explicitly shares therapy transcripts.
Why Companies Want This Data
Mental-health data is valuable because it reveals emotional vulnerability, purchasing behaviour, and susceptibility to certain messaging. Behavioural economists have long understood that people make different decisions when stressed, anxious, lonely, or depressed. Advertising systems built by large tech companies aim to predict those vulnerable states in real time.
A Head of Product explained: “Knowing someone is anxious matters. Knowing when they are anxious is priceless.” Patterns like late-night app usage, repeated searches for comfort, or spikes in depressive content engagement signal when someone may be more easily influenced by specific types of ads.
Data brokers combine emotional signals with financial stress indicators, location histories, and online browsing to create enriched profiles. These profiles allow marketers to target users based not just on interests but on emotional disposition. Platforms can identify when someone may be primed to buy certain products, click certain links, or engage with emotionally charged messages. This data also has resale value: brokers package mental-health-related signals and sell them to companies that want to target “high-stress individuals,” “anxious users,” or “people seeking relief.”
This is not hypothetical—it is how the advertising ecosystem already works.
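As a simplified illustration of that segmentation step (the profile fields, thresholds, and segment label below are invented for this sketch, not taken from any real broker), a broker-style rule might look like this:

```python
# Hypothetical enriched profile stitched together from several data sources.
profile = {
    "late_night_mental_health_app_use": True,   # behavioural signal from an app SDK
    "recent_short_term_loan_searches": True,    # financial-stress signal from browsing data
    "frequent_address_changes": False,          # stability signal from location history
}

def assign_segment(p: dict) -> str:
    """Invented broker-style rule that turns raw signals into a sellable audience."""
    if p["late_night_mental_health_app_use"] and p["recent_short_term_loan_searches"]:
        return "high-stress, financially pressured"
    return "general audience"

print(assign_segment(profile))  # high-stress, financially pressured
```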
What Happens When Mental Health Data Gets Breached or Misused
Breaches happen across industries, including at reputable tech companies. When mental-health data is exposed, the harm can be severe. Emotional disclosures can surface in public data dumps. Location histories tied to therapy usage can reveal patterns a user would never want made public. If device identifiers are stolen from advertising frameworks, malicious actors can combine them with leaked email lists to re-identify individuals.
A Head of Data Science described this as “the downstream risk problem”: once data enters the ecosystem, no single company controls it. A breach at an analytics provider, advertising partner, or data broker can expose information originally collected by a therapy app. Even if the app itself is never hacked, the risk persists.
Discrimination and Employment Risks
Employers and insurers increasingly analyse behavioural data in aggregate. A dataset that suggests emotional instability, insomnia, or depressive patterns could theoretically influence hiring decisions, promotion opportunities, or insurance pricing models. While direct discrimination based on mental-health conditions may be illegal in some jurisdictions, the use of behavioural proxies remains largely unregulated.
Identity Theft and Financial Fraud
Mental-health apps often collect personal details such as names, birthdates, email addresses, and payment information. Combined with behavioural data, this forms a rich profile for identity thieves. Fraudsters may exploit vulnerability signals to target people with scams, manipulative offers, or phishing attempts. Criminals often favour emotionally distressed individuals because distress correlates with impaired decision-making.
The psychological harm of a mental-health data breach can exceed the financial harm. People experience shame, fear, and a loss of safety when their private struggles become exposed.
How To Protect Your Privacy When Using Therapy Apps
Protecting privacy requires intention, not paranoia. Start by reading the parts of privacy policies that matter most: what data is collected, who receives it, whether it’s used for advertising, and whether the company uses third-party analytics. Look for language that hints at broad sharing, vague purposes, or unclear definitions of partners.
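If you want a more systematic way to do that first pass, the sketch below shows one approach (the phrase list is our own illustrative starting point, not an authoritative checklist): scan a policy excerpt for the broad-sharing language that deserves a closer read.

```python
# Illustrative red-flag phrases; adapt the list to your own concerns.
RED_FLAGS = [
    "third-party advertising", "marketing partners", "service improvement",
    "affiliates", "data brokers", "personalised ads", "sell your information",
    "analytics providers", "combine with other sources",
]

def flag_policy(policy_text: str) -> list[str]:
    """Return the red-flag phrases found in a privacy-policy excerpt."""
    text = policy_text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

excerpt = ("We may share usage data with analytics providers and marketing "
           "partners for service improvement and personalised ads.")
print(flag_policy(excerpt))
# ['marketing partners', 'service improvement', 'personalised ads', 'analytics providers']
```

A match is not proof of wrongdoing, but it tells you exactly which sentences to read slowly.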
During setup, treat privacy settings as part of the onboarding process. Disable unnecessary permissions. Avoid using social sign-in options with accounts like Google or Facebook, which create links between your therapy usage and broader identity. Restrict location access unless essential. Turn off marketing preferences. Review which apps your device allows to track you.
Most importantly, choose apps that demonstrate respect for user privacy. Look for transparent disclosures, clear explanations, third-party audits, minimal data collection, and straightforward deletion options. Apps that embed advertising frameworks or rely heavily on behavioural analytics may not align with the privacy expectations users have when seeking mental-health support.
If you want a deeper understanding of privacy risks and ethical guidelines for organisations deploying mental-health technology, the therappai guide Ethical AI Therapy: How HR Can Safely Deploy Mental Health Technology is a valuable resource.
How therappai Approaches Therapy App Privacy Differently
While many therapy apps treat privacy as a compliance issue, therappai treats it as an ethical foundation. The platform is built around the belief that emotional data is not a commodity and should never be monetised, analysed for advertising, or shared with third-party marketing infrastructure.
The CTO describes therappai as “a clear box in a black box world.” Users see exactly what is collected, why it is collected, and how it is stored. No obfuscation. No manipulation. No dark patterns. Unlike many apps that rely on third-party analytics tools that automatically send data to large tech companies, therappai consciously avoids embedding ad-tech SDKs or behavioural trackers that would allow external companies to infer user emotional states.
A second principle guiding therappai’s design is the choice of “privacy over speed.” Some privacy-preserving architectures are slower or more difficult to build. On-device processing requires more engineering investment. Strict data silos require more infrastructure. But these choices protect users by ensuring no single system has access to all information, and that no external analytics platform can map mental-health disclosures to behavioural patterns.
A third principle is what the CTO calls “the wall of separation.” Therapy session content is stored entirely separately from engagement analytics. Even internally, engineers cannot correlate a user’s therapeutic disclosures with their usage patterns. This ensures that therapy content remains private not only externally, but within the company itself.
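The snippet below is not therappai’s code; it is a minimal conceptual sketch of what a wall of separation can mean in practice, assuming two stores keyed by unrelated random identifiers so that holding either store (or both) is not enough to join therapy content to behaviour.

```python
import secrets

# Conceptual sketch only, not therappai's implementation: content and analytics
# live in separate silos, keyed by identifiers that are never stored together.
therapy_store = {}    # therapeutic disclosures, keyed by a random content ID
analytics_store = {}  # engagement metrics, keyed by an unrelated analytics ID

def issue_identifiers() -> tuple[str, str]:
    """Generate two independent random IDs; the pairing is never persisted in either silo."""
    return secrets.token_hex(8), secrets.token_hex(8)

content_id, analytics_id = issue_identifiers()

therapy_store[content_id] = {"entry": "journal text, encrypted at rest"}
analytics_store[analytics_id] = {"sessions_this_week": 4}

# Neither store contains the other's key, so an engineer with database access,
# or an attacker who steals one silo, cannot correlate disclosures with behaviour.
assert content_id not in analytics_store and analytics_id not in therapy_store
```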
The platform collects only the data required to provide care. Storage is encrypted end-to-end. Users can access and delete their data at any time. All practices undergo regular third-party privacy audits.
To learn more about these principles, readers can explore the therappai Trust Centre at: https://therappai.trust.site/
Get Early Access to Privacy-First Therapy
Your mental-health journey deserves the same level of protection as your physical health, your identity, and your most intimate personal spaces. Therapy apps should support healing, not expose vulnerability. They should build trust, not extract data. They should prioritise privacy, not advertising pipelines.
therappai is built on that belief.
If you want to experience a therapy platform designed around privacy, dignity, and genuine care, you can get early access—without sacrificing your safety or your emotional confidentiality—at https://www.therappai.com.
Your inner world deserves protection. Your privacy deserves respect. And your mental-health support should never be a data source. This is what therapy apps must become. This is what therappai is building.

External Citations Index
1. JAMA Network Open — Mental Health Apps Sharing Data
Study showing that 29 of 36 mental-health apps shared data with Facebook or Google through embedded trackers.
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2730782
2. Consumer Reports — Mental Health Apps Transmitting Sensitive Identifiers
Investigation revealing that apps sent device identifiers and interaction data to Meta/Facebook, even when users weren’t logged in.
https://www.consumerreports.org/health/health-privacy/mental-health-apps-and-user-privacy-a7415198244/
3. Mozilla Foundation — “Privacy Not Included” Mental Health App Audit
Report showing major privacy failures in 17 of 27 mental-health apps, including problematic tracking, vague policies, and data sharing.
https://foundation.mozilla.org/en/privacynotincluded/articles/are-mental-health-apps-better-or-worse-at-privacy-in-2023/
4. Brookings Institution — Why Mental Health Apps Need Stronger Privacy
Explains how apps “generate massive amounts of sensitive data” and how companies scraped user data or shared it for advertising.
https://www.brookings.edu/articles/why-mental-health-apps-need-to-take-privacy-more-seriously/
5. STAT News — Mental Health Apps Capturing and Using Data
Reveals that many apps used the emotional data they collected “to create products that have nothing to do with health care.”
https://www.statnews.com/2019/09/20/mental-health-apps-capture-sensitive-data/
6. US Senate Inquiry (Reported via HIPAA Journal)
Senators questioning mental-health app providers about data-sharing practices that would be illegal under HIPAA if they were covered entities.
https://www.hipaajournal.com/senators-question-mental-health-app-providers-questioned-about-privacy-and-data-sharing-practices/
7. Springer Software Quality Journal — High-Risk Data Exposure in MH Apps
Peer-reviewed analysis of 27 mental-health apps identifying unnecessary permissions, weak encryption, data-leak vulnerabilities, and user-profiling risk.
https://link.springer.com/article/10.1007/s10664-022-10236-0
8. FTC Enforcement Actions Against Health Apps
FTC enforcement under the Health Breach Notification Rule, showing apps can violate privacy even without HIPAA (example: the Flo Health case).
https://www.ftc.gov/news-events/news/press-releases/2021/01/ftc-ensures-flo-healths-privacy-representations-are-clear-transparent
9. Privacy International — Cross-App Tracking & Vulnerability Signals
Shows how device identifiers and behavioural metadata enable cross-app psychological profiling.
https://privacyinternational.org
10. Harvard Business Review — The Danger of Data Brokers
Explains how data brokers combine health, financial, and behavioural data to build profiles sold to advertisers, employers, and insurers.
https://hbr.org/2020/01/what-facebook-knows-about-you



