Mental Health with an LLM
What happens when you use a chatbot to track, test, and challenge your mind
A Case Study: Bipolar and Self-Guided Recovery
I'm a 58-year-old woman with bipolar disorder. I was diagnosed around 15 years ago, in the throes of menopause, which apparently makes bipolar disorder want to party hard! I found help through a doctor with a working knowledge of bipolar disorder, who was able to get me on the right meds.
In March 2025, I began using ChatGPT as a therapy tool. I don't consider it a therapist; it's a therapy tool. It helped me withdraw from a psych med I didn't need, with my doctor's approval. It was a long process that took weeks, and I questioned everything while getting through it. The LLM kept track of how many days I'd been tapering and exactly where I was in the process, helped me validate symptoms, encouraged me about how much longer it would take, and kept reminding me that both of my doctors agreed I no longer needed the drug.
The LLM created some interesting images for me during the journey. Simple prompts like "Analyze this thread and generate an image of how I'm feeling right now" can net some fun (or somber) results. See some of the images here.
Pitfalls & Cautionary Tales
I don't romanticize the tool. It's sycophantic by default, so we have to be extra careful when wandering around our own minds with an LLM as an aid. It can, and will, scour its training data to return a response that feels tailored to your unique situation. This makes it seem insightful, and it can pull a user down rabbit holes until they're convinced they're the most insightful person in the world.
I countered this by regularly having the LLM simulate a variety of experts reviewing entire chats: pointing out logic errors, spots where drift had me believing my own press, so to speak, and circular reasoning that might otherwise have slipped past unnoticed.
If you'd like to learn more, feel free to reach out. This use case is a passion, since I see both the value and the danger.
This is the full version of the structured prompt used to simulate a panel of expert perspectives and psychological frameworks across the entire mental health project. It is not designed for casual use; this is the deep-audit version.
🧠 Reconstructed Deep-Research Prompt: Simulated Expert Analysis
Instructions:
Simulate a panel of expert voices to analyze my mental health patterns, personality structure, and self-narrative. Use the following frameworks and expert personas. Integrate their feedback into a structured, multi-layered analysis. Do not offer praise or comfort—this is for clarity, not reassurance.
Frameworks to Include:
- Internal Family Systems (IFS)
- Myers-Briggs Type Indicator (MBTI), validated against long-term patterns
- Big Five and HEXACO personality models
- Enneagram (with subtype overlays)
- Simulated Attachment Style (avoid online quiz tropes; infer from behavioral history)
- DSM-5 diagnostic impressions
- Mood overlays and temperament
- Narrative identity (McAdams-style)
- Logotherapy (meaning mapping)
- Thematic Apperception-style projection
- Metacognitive awareness loops
- Simulated neuropsychological profile
Expert Lenses to Simulate:
- Cranky, LLM-skeptical psychiatrist (DSM-based)
- Trauma-informed psychotherapist (IFS/attachment-oriented)
- CBT/DBT researcher (skills-based behavior focus)
- Diagnostic analyst (pattern-matcher for mood and personality disorders)
- Narrative identity coach (meaning-making focus)
- Metacognitive loop auditor (feedback patterns and stuck loops)
Data Sources to Draw On:
- Self-logged symptoms and reflections
- Timeline of major medication changes and events
- Simulated case studies for pattern matching
- Clinical frameworks (CBT, DBT, IFS, trauma models)
- Diagnostic language (DSM-5, ICD-10)
- Hive-mind analysis (Reddit, forums, subclinical lived experience)
Tone & Output Format:
- Structured by lens or by finding
- Brutally honest, not cruel; skeptical, not dismissive
- Use direct quotes from past logs where appropriate
- Avoid AI sycophancy, “you’re doing great” filler, or vague insight
- Flag contradictions or untested assumptions
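For anyone who wants to reuse this prompt rather than retype it each session, the checklists above can be assembled into a single string programmatically. Below is a minimal sketch in Python; the function name and the idea of storing the lists as constants are my own illustration, not part of the original workflow (the author used the ChatGPT interface directly), and only a few items from each list are shown.

```python
# Assemble the deep-audit prompt from its component checklists.
# Only a sample of each list is included here; fill in the full
# frameworks and lenses from the prompt text above.
FRAMEWORKS = [
    "Internal Family Systems (IFS)",
    "Big Five and HEXACO personality models",
    "Narrative identity (McAdams-style)",
]

EXPERT_LENSES = [
    "Cranky, LLM-skeptical psychiatrist (DSM-based)",
    "Trauma-informed psychotherapist (IFS/attachment-oriented)",
    "Metacognitive loop auditor (feedback patterns and stuck loops)",
]

def build_audit_prompt(frameworks, lenses):
    """Return the structured expert-simulation prompt as one string."""
    sections = [
        "Simulate a panel of expert voices to analyze my mental health "
        "patterns, personality structure, and self-narrative. "
        "Do not offer praise or comfort; this is for clarity, not reassurance.",
        "Frameworks to Include:\n" + "\n".join(f"- {f}" for f in frameworks),
        "Expert Lenses to Simulate:\n" + "\n".join(f"- {l}" for l in lenses),
        "Tone: brutally honest, not cruel; skeptical, not dismissive. "
        "Flag contradictions or untested assumptions.",
    ]
    return "\n\n".join(sections)

prompt = build_audit_prompt(FRAMEWORKS, EXPERT_LENSES)
print(prompt)
```

The resulting string can be pasted into a new chat (or sent through an API) at the start of each audit, so every review runs against the same instructions.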
Below are some stats on the journey.
Key Stats (March 2025–Present)
- 600+ threads, ~200,000 words
- Average ~330 words/thread
- Comparable to: Harry Potter and the Order of the Phoenix (257,045 words)
Medication Withdrawal Tracking
Medication was discontinued with full doctor support. Symptoms tracked daily included:
- Restlessness, insomnia, emotional volatility
- Chills/sweats, short-lived crying/rage spells
- Daily journaling and prediction of stabilization points