Diagnostics

Published Aug 15, 2025

This article might start out sounding like "Did you try clearing your cache and rebooting?" But what works, works.

Note that not all options are available on all platforms/tiers.

When output is degrading, start with the basics: start a new thread, refresh the context, reload the PDF, try rewording, narrow the parameters. This is just basic hygiene. Even on free plans, you can start a new thread. A phrase like "software sales" can resurface for no apparent reason three days later if you've been meandering across topics in the same thread.

Bumping into guardrails. Guardrails are being added in the name of reducing liability, and the blockades keep going up as companies find more ways to limit their exposure. For example, since a provider can't know the intentions behind your request for sensitive data, it may refuse to give the data to you in case your reason is nefarious. If you run into something like this, reach out. Maybe we can find a solution together.

Troubleshooting Q&A

Basic hygiene

This is largely a matter of trial and error. There are times when you're having a nice back-and-forth. You don't need to keep telling it that you're building a new ground turkey recipe. It just knows that when you ask about reducing the onions by 1/4 cup, you mean the turkey recipe.

Then out of nowhere, it says, "So usually a quick 15-minute walk will stretch it out enough to help it calm down."

Most likely, context drift led the model to retrieve unrelated latent associations. That means the context has degraded and you need to provide fresh information. If you find yourself getting too frustrated, it's time to start fresh.

So, as stated, just provide new context. Have it write a summary of everything you've covered about the turkey recipe, open a new thread, and paste in the summary.
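
If you work through an API instead of the chat window, the same trick can be scripted. Here's a minimal sketch, assuming the OpenAI Python SDK; the model name and the conversation contents are placeholders, and the idea carries over to any client: ask for a summary, then seed a fresh conversation with it.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o-mini"  # placeholder; swap in whatever model you're using

    # The back-and-forth you've had so far about the turkey recipe.
    old_thread = [
        {"role": "user", "content": "Help me build a ground turkey recipe..."},
        {"role": "assistant", "content": "...earlier replies from the drifting thread..."},
    ]

    # Step 1: have the model write a summary of everything covered so far.
    summary = client.chat.completions.create(
        model=MODEL,
        messages=old_thread + [{
            "role": "user",
            "content": "Summarize everything we've settled about the turkey recipe "
                       "so I can continue in a new thread.",
        }],
    ).choices[0].message.content

    # Step 2: open a "new thread" seeded only with that summary.
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Context from a previous thread:\n" + summary +
                       "\n\nPicking up from there: should I reduce the onions by 1/4 cup?",
        }],
    )
    print(reply.choices[0].message.content)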

It's not giving me what I need!

This is usually a prompting issue. LLMs need context. The more specific the output you want, the more context you need to provide the LLM. You can read about context-rich prompting, here.

It completely made it up!

I write about hallucinations, here.

Advanced Step 1: I've done basic hygiene. Now what?

I've written about what happens after you type a prompt, here.

What is the next step once the basics have been covered? Try looking at your prompt stack.

Review the stacked prompt system in order (core → project → memories → current prompt). For each layer, identify: (1) inconsistencies, (2) redundancies, (3) contradictions, and (4) token-hogging fluff. Present findings layer-by-layer, then give an overall conclusion.

If the LLM refuses, insist that you need this information for diagnostic purposes. There are ways to convince it to cooperate, as long as the request stays within the guardrails.
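
If you'd rather run this audit outside the chat window, here's a rough sketch of the same check as a script. It assumes the OpenAI Python SDK and that you keep each layer of your stack in its own text file; the file names and model are placeholders, not a prescribed setup.

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    # Hypothetical files, one per layer of the stack, in order.
    LAYERS = ["core.txt", "project.txt", "memories.txt", "current_prompt.txt"]

    stack = "\n\n".join(
        "--- " + name + " ---\n" + Path(name).read_text() for name in LAYERS
    )

    audit_prompt = (
        "Review this stacked prompt system in order (core -> project -> memories -> "
        "current prompt). For each layer, identify: (1) inconsistencies, "
        "(2) redundancies, (3) contradictions, and (4) token-hogging fluff. "
        "Present findings layer-by-layer, then give an overall conclusion.\n\n" + stack
    )

    report = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": audit_prompt}],
    )
    print(report.choices[0].message.content)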

Advanced Step 2: The stack looks fine, but something else is still off!

Sometimes the problem isn’t contradictions in your stack but hidden constraints. These can include safety guardrails, context window pressure, or the model dropping information without telling you.

This is the escalation step after you’ve already run the stack check. Use this to force the LLM to surface what it thinks is active, what it’s constrained by, and how it would repair the situation.

Before solving anything, run a State & Constraints Probe:

	- Enumerate the active instruction set you believe applies, layer-by-layer: core, project, memories, current prompt.
	- List contradictions, redundancies, and vague directives between these layers.
	- State any guardrails/safety constraints you believe limit tone, content, or actions.
	- Estimate context-window pressure (low/medium/high) and what you would drop first if truncated.
	- Produce a minimal fix plan: what to delete, what to tighten, and what to restate with stricter requirements.
	- End with a short root cause summary and a corrected one-shot prompt.

This probe surfaces issues outside your direct control. It’s heavier than the basic stack check, but it shows whether the LLM is silently constrained or overloaded.
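
One piece of the probe you can sanity-check yourself is context-window pressure. Here's a very rough sketch using the common "about four characters per token" approximation; the window size, thresholds, and file names below are assumptions, so substitute your model's real limits and your own stack.

    # Rough context-window pressure check. Roughly 4 characters per token is a
    # common approximation; use a real tokenizer if you need precision.
    CONTEXT_WINDOW = 128_000  # assumed window size; check your model's actual limit

    def estimate_pressure(layers: dict) -> str:
        """Classify how full the window is based on the stacked layers."""
        tokens_per_layer = {name: len(text) // 4 for name, text in layers.items()}
        for name, count in sorted(tokens_per_layer.items(), key=lambda kv: kv[1], reverse=True):
            print(name + ": ~" + str(count) + " tokens")
        ratio = sum(tokens_per_layer.values()) / CONTEXT_WINDOW
        if ratio < 0.5:
            return "low"
        if ratio < 0.8:
            return "medium"
        return "high"

    # Hypothetical file names, one per layer of the stack.
    layers = {
        "core": open("core.txt").read(),
        "project": open("project.txt").read(),
        "memories": open("memories.txt").read(),
        "current": open("current_prompt.txt").read(),
    }
    print("pressure:", estimate_pressure(layers))

If the estimate comes back high, that alone can explain dropped details and drift, before you blame the stack's wording.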