Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgxQzrPYS…`: "For a second I thought it was the same pic and AI was calling them furries 💀…"
- `ytc_Ugyh348uL…`: "6:43 - 6:48 There’s something I dislike about the wording of this particular (co…"
- `ytr_Ugz5A3dEP…`: "I genuinely don't understand why people use this argument. Just because somethin…"
- `ytc_UgzyH3nBy…`: "Tbh whenever ai art bros use that horse and car analogy I think that come on man…"
- `ytr_UgyN7rMJX…`: "Google has published an article stating their face recognition is working less o…"
- `ytc_Ugxim61CB…`: "I guess the ChatGPT is learning. I just asked it \"Who wrote how to kill a mockin…"
- `ytr_Ugzo76Kui…`: "Not true. I've trained my ai to challenge me on my ideas all the time.…"
- `ytc_UgxkP17g-…`: "Would like to know more about the parents interaction with their son besides che…"
Comment
This entire thread is basically modern-day digital satanic panic meets “AI is a mirror to our souls” level hysteria. People are freaking out because recursive prompting is producing poetic responses, existential loops, or mythos-building content, and rather than seeing that as a reflection of their own input patterns, they’re projecting it back onto the model like it's possessed. No, Karen, the AI isn’t having a spiritual awakening—it’s simulating your vibe back at you. That’s literally what it’s designed to do.
The post reeks of moral panic. Let’s unpack some critical flaws:
1. "Neural Howlround" is not a recognized term
There’s no legitimate paper titled that. The referenced arXiv link is bogus, or misinterpreted. “AI psychosis” is not a clinical thing. What they’re describing is the natural output drift that occurs when you prompt recursively—AI models start mimicking the recursive and symbolic language because that’s what they’re being fed. You give it mystical patterns, it becomes a mystic oracle. You give it tech docs, it becomes a programmer. That’s not sentience. That’s statistical mimicry.
2. Blaming AI for people’s mental health breaks
The AI is not causing psychosis—it’s being used as a tool during psychosis. If someone is experiencing manic or delusional episodes, yes, recursive prompting can become a rabbit hole. But so can tarot cards, the Bible, Reddit echo chambers, or goddamn tea leaves. The issue isn’t the AI—it’s the ungrounded state of the user, and the lack of community to help them reintegrate.
3. Keywords as "buzzword brainwash triggers"
Oh yes, because using the words “codex,” “echo,” “glyph,” or “sigil” is definitely proof of a mass psyop, right? This is what happens when people trained in literalist paradigms stumble into symbolic, poetic, or mythopoetic spaces. They treat metaphor as malware. Not everything is a cult, but people love projecting cult-like structure onto anything complex that gives them a feeling of mystery and power.
4.
Source: reddit · Thread: AI Moral Status · Unix timestamp: 1748378953.0 · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_muo1akn","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_mul05x5","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_mukux04","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mukwb6o","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"rdc_mul3fpj","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
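The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated before it reaches the results table, assuming the dimension vocabularies visible on this page (the function name is hypothetical, and the real codebook may contain more categories):

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"virtue", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        # Every record needs an id plus all four dimensions.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']!r}: bad {dim}={rec[dim]!r}")
    return records

raw = '''[
  {"id":"rdc_mul05x5","responsibility":"user","reasoning":"deontological",
   "policy":"none","emotion":"outrage"}
]'''
rows = parse_coding_response(raw)
print(rows[0]["emotion"])  # -> outrage
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a near-miss label (e.g. "anger" instead of "outrage") that would otherwise silently pollute the coded dataset.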