Erika Solis
Google Public Policy Fellow, Open Technology Institute
In recent years, the COVID-19 pandemic has fueled a surge in demand for mental health services, putting immense pressure on the existing system. With the demand for treatment far outpacing the supply of psychologists, people in need have turned to less traditional options for care.
Among these alternatives are mental health applications that use chatbots powered by artificial intelligence (AI). Initially limited to mood tracking and basic symptom-management advice, these apps now leverage advanced AI to deliver more personalized support and extend care to underserved settings, like low-income areas and schools. AI chatbots in mental health apps are trained to understand behavior and respond to individuals, and users can discuss sensitive mental health issues with them.
However, there are concerns regarding how the data that users share with these chatbots is handled. Some apps share information with third parties, like insurers, a move that can affect coverage decisions for people who use these chatbot services. They're able to do this because Health Insurance Portability and Accountability Act (HIPAA) regulations don't fully apply to third-party apps. Unlike more traditional healthcare providers, these apps can operate with varying levels of transparency and protection for sensitive patient data.
So, what exactly happens with user data collected by mental health apps? A deep dive into apps like Elomia, Wysa, Mindspa, and Nuna provides a troubling answer: it depends on the app. While many mental health applications perform similar essential functions, their approaches to data collection and secure storage can differ significantly. These differences can affect patient data security, so it's crucial that companies incorporating AI into mental health services are well-versed in existing privacy policies and adhere to best practices for safeguarding user data.
Apps collect two important types of user details: personal information and sensitive information. Personal information, such as one's birthday, is data that can identify an individual. Sensitive information, on the other hand, includes data, such as a formal diagnosis, that could compromise an individual's privacy rights if mishandled.
Some apps, for instance, lack contextual distinctions in their privacy policies, failing to distinguish between ordinary and crisis-related sensitivities. Others clearly delineate their protection measures, particularly for health-related data, separating personal from sensitive data.
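To make the distinction concrete, here is a minimal sketch in Python of how an app could tag each collected field as personal or sensitive, so that stricter handling rules can be applied automatically. The schema and field names are invented for illustration; none of the apps discussed here documents such a structure.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    PERSONAL = auto()   # identifying but routine, e.g., a birthday or email
    SENSITIVE = auto()  # could compromise privacy rights if mishandled

@dataclass(frozen=True)
class Field:
    name: str
    category: Category

# Hypothetical schema, for illustration only.
SCHEMA = [
    Field("email_address", Category.PERSONAL),
    Field("birthday", Category.PERSONAL),
    Field("formal_diagnosis", Category.SENSITIVE),
    Field("chat_transcript", Category.SENSITIVE),
]

def requires_strict_handling(field_name: str) -> bool:
    """Return True if a field is sensitive and needs the stronger protections."""
    return any(f.name == field_name and f.category is Category.SENSITIVE
               for f in SCHEMA)

print(requires_strict_handling("formal_diagnosis"))  # True
print(requires_strict_handling("birthday"))          # False
```

A policy that draws this line explicitly, as the sketch does, is also easier for auditors and users to verify than one that treats all data as a single undifferentiated pool.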
Applications collect personal and sensitive information through account creation and application usage. Some apps prohibit users from deleting certain information, like name, gender, or age, unless the user deactivates their account. One such app lists "name" and "email address" as required data fields for account access. It may also request physical and mental health data, which users can refuse, though at the cost of reduced app functionality.
AI-powered mental health apps use collected data in various ways, and some are more upfront about that usage than others. Elomia features an AI chatbot reportedly trained on therapist consultation data, but it is not transparent about how this information is used in practice. And while its privacy policy guarantees non-disclosure of personal information to third parties, Elomia elsewhere notes that individuals' data may be used for advertising purposes.
Limbic, on the other hand, primarily uses human therapists, but it encourages these therapists to save time by using its AI-based referral assistant to collect demographic information and determine current risks and harms to users prior to a conversation. The company is also expanding to incorporate an AI-based therapy assistant, Limbic Access, and encourages therapists to use this chatbot first. The primary difference is that while the current Limbic intends to use AI as a secondary measure, Limbic Access prioritizes an AI-first approach.
Data retention policies also differ significantly among platforms: some apps retain data for as little as 15 days, others for as long as 10 years. Some, including Wysa, establish clear retention periods, whereas others retain data without specifying timelines for deletion. And Mindspa allows users to request data deletion, but offers no explicit assurance that such requests are honored.
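By contrast, a clear retention rule is simple to state precisely. The Python sketch below, with invented record types and windows chosen to mirror the range described above, shows the kind of check a well-specified policy implies: once a record outlives its window, it is eligible for deletion.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows, spanning roughly 15 days to 10 years.
# Real apps rarely publish their windows this explicitly.
RETENTION = {
    "chat_transcript": timedelta(days=15),
    "account_profile": timedelta(days=365 * 10),
}

def is_expired(record_type: str, created_at: datetime) -> bool:
    """Return True once a record has outlived its retention window."""
    return datetime.now(timezone.utc) - created_at > RETENTION[record_type]

# A transcript stored 30 days ago is past a 15-day window and should be purged.
stored = datetime.now(timezone.utc) - timedelta(days=30)
print(is_expired("chat_transcript", stored))  # True
```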
Companies looking to improve their standards and prioritize consumer privacy can take specific actions. It's crucial for platforms to clearly differentiate between personal and sensitive information and to be transparent about how each is handled. In the United States, companies can adopt measures stricter than existing law requires, creating a standard that treats all data types equally. Improving privacy practices means adopting transparent definitions of sensitive health information and maximizing data protection measures.
Mozilla's evaluations of mental health applications, including those centered on AI, highlight companies already on the right track. Similar audits can encourage mental health apps to fortify their privacy policies further, safeguarding consumer interests. It is also essential for industry stakeholders to come together and establish standardized data transparency tools akin to nutrition labels, an approach OTI proposed in 2009, detailing what data health platforms collect and how it's used.
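As a rough illustration of what such a label could look like in machine-readable form, here is a short Python sketch. Every app name, field name, and value is invented; OTI's proposal does not prescribe this format.

```python
import json

# Hypothetical machine-readable "privacy nutrition label" for an invented app.
label = {
    "app": "ExampleCare",
    "data_collected": ["email_address", "mood_logs", "chat_transcripts"],
    "shared_with_third_parties": ["analytics_provider"],
    "used_for_advertising": False,
    "retention_days": {"chat_transcripts": 15, "account_profile": 3650},
    "user_can_request_deletion": True,
}

print(json.dumps(label, indent=2))
```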
If you're considering an AI-powered mental health app but have concerns about data privacy, consult user reviews and assessments from privacy watchdogs like Mozilla. Platforms like the Google Play Store clearly present privacy information, detailing each app's data collection and usage policies in a format that is easy for consumers to digest. As tedious and time-consuming as this may seem, it's crucial for making a well-informed decision about whom you entrust with your data.