AI Use Policy
Effective date: 18 August 2025
Company: Naked Mental Health Limited (registered in England & Wales, company no. 16124798)
Registered address: 16 Gardeners Close, Maulden, Bedfordshire, MK45 2DY, United Kingdom
Contact: support@naked.health
Representative (EU, if applicable): Philip Reeve (philip@naked.health)
At a glance
- Not medical advice. AI supports reflection and learning; it is not diagnosis or treatment.
- Content-first. AI builds on our own, validated content library; we don’t let it freestyle health advice.
- Privacy-first. Journals are private and are not used to train AI models.
- No ads, no sale of data. We don’t sell your data or use it for third-party advertising.
- Your choice. You can turn off certain AI features and delete your data at any time.
1. Why we use AI
We use AI to:
- Generate guided exercises, affirmations, and plain-language explanations faster and more affordably.
- Convert text to natural-sounding audio.
- Provide gentle, optional suggestions (e.g., “people like you found breathing helpful”).

This helps us ship high-quality support tools to more people at a lower cost.
2. Where AI is (and isn’t) used
Used in:
- Guided content: meditations, breathing scripts, visualisations, explainers.
- Writing helpers: journal prompts, reframing examples, summaries you explicitly request.
- TTS/voice: turning text into audio for hands-free support.
- Recommendations (optional): content suggestions based on your in-app activity.
Not used in:
- Emergency/crisis handling. We don’t run crisis triage. If you’re at risk, contact local emergency services.
- Medical decisions. We don’t diagnose, prescribe, or replace clinicians.
- Advertising profiles. We don’t build or sell ad profiles about you.
3. Source of truth: “content over conjecture”
- AI outputs are grounded in our internal content (techniques we’ve written or curated).
- We instruct AI to stay within those sources and avoid speculation.
- When we experiment with new content, it is reviewed by humans before publication.
4. Data we do / don’t send to AI providers
We may send (minimised):
- The prompt you trigger (e.g., “create a 5-minute box-breathing guide”).
- Context needed to fulfil your request (e.g., your chosen technique, difficulty, preferred voice).
- Non-sensitive usage signals (e.g., “user prefers short sessions”) where you’ve enabled personalisation.
We don’t send:
- Journal entries, by default. Journals are private.
- Government IDs or payment card numbers (payments are handled by the app stores/Stripe via our partners).
- Any data not needed to fulfil the request.
If you explicitly choose to summarise a specific journal entry, we will only process the text you select for that action, and we won’t use it to train models.
5. Training, fine-tuning, and model improvements
- We do not use your journals to train AI models.
- For other AI requests, we configure providers not to use your data to train their models wherever that setting is available.
- We don’t fine-tune models on individual user data. If we ever train a model on aggregated or synthetic data, it will be de-identified and risk-assessed first.
6. Aggregated analytics (non-identifying)
We may collect anonymous, aggregated statistics (e.g., “200 people used breathing exercises this week”) to improve features and capacity planning. These stats cannot identify you.
7. Legal bases & fairness (UK/EU users)
- Contract (UK/EU GDPR Art. 6(1)(b)): to provide requested features (e.g., generating an exercise you asked for).
- Legitimate interests (Art. 6(1)(f)): for service quality, security, and non-intrusive analytics; you can object.
- Explicit consent (Art. 9(2)(a)): when you ask us to process health-related content (e.g., summarising a journal entry). You can withdraw consent by deleting the content or disabling the feature.
- We do not carry out automated decision-making with legal or similarly significant effects (Art. 22).
(See our Privacy Policy for full details.)
8. User controls
- Personalisation toggle: turn off AI-based suggestions in Settings.
- Data controls: export or delete your data (journals, activity) in-app.
- Opt-in actions: features that touch sensitive content (e.g., “summarise this entry”) are explicitly triggered by you.
- Notifications: you control reminders and nudges.
9. Safety, quality, and review
- Human-in-the-loop review: new or materially changed content categories are reviewed by humans before they ship broadly.
- Guardrails: prompts and filters that block or redirect unsafe content (e.g., self-harm instructions).
- Labelled AI: where feasible, AI-generated items are tagged in-app (e.g., “AI-generated from Naked’s library”).
- Feedback loop: simple in-app tools to flag issues; we prioritise safety fixes.
10. Providers & processing locations
We use reputable vendors to deliver AI and media features (examples: OpenAI for text, ElevenLabs for TTS, Cloudflare for media delivery, Firebase for auth/analytics). Some processing may occur outside the UK/EEA with approved safeguards (e.g., SCCs/UK Addendum). See our Privacy Policy for the current provider list and transfer safeguards.
11. Retention
- Prompts and outputs related to your account are retained only as long as needed to provide the feature, then deleted or minimised.
- Journals persist until you delete them or your account.
- Provider logs may exist for a short period under the providers’ security policies; we configure no-training/no-retention options where available and have DPAs in place.
12. Changes, experiments & transparency
- We’ll announce material changes to AI features or data use in-app.
- Beta/experimental features will be labelled and may have additional disclosures.
- We maintain a changelog on this page noting what changed and why.
13. Questions or concerns
Email: support@naked.health
Postal: Naked Mental Health Limited, 16 Gardeners Close, Maulden, Bedfordshire, MK45 2DY, United Kingdom