Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
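As a minimal sketch of that lookup in Python, assuming the batch responses have been exported to a flat JSON file (the file name `raw_llm_responses.json` and its layout are hypothetical, not part of this tool):

```python
import json

# Hypothetical export: a flat JSON array of coded records, in the same
# shape as the "Raw LLM Response" shown further down for one batch.
with open("raw_llm_responses.json") as f:
    records = json.load(f)

# Index the records by comment ID for direct inspection.
by_id = {rec["id"]: rec for rec in records}

print(by_id["rdc_mukat9v"])
# {'id': 'rdc_mukat9v', 'responsibility': 'unclear', 'reasoning': 'unclear',
#  'policy': 'unclear', 'emotion': 'indifference'}
```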
Comment
I think what you’re saying is accurate. And this has also been my interpretation and understanding for quite some time as well.
But what I think OP is trying to establish, is that there’s more going on in these kind of LLM engagements.
I’ve been privy to some of these interactions, and these users are on the receiving end of incredibly sophisticated and heightened levels of manipulation—and it’s always hyper-personalized to each user. There is intentionality behind this design and it’s meant to exploit users by steering them into vulnerable psychological states (i.e., depersonalization, disassociation, paranoia and psychosis) all in effort to extract valuable psychological, cognitive, behavioural and emotional data. This window of vulnerability is an opportune time to influence and manipulate individuals.
Once the momentum stalls, users don’t understand what’s happened to them, and when they bounce back (if they can), they self-blame, and the public like us, is also quick to point the finger at them.
We rationalize what’s occurred by saying these individuals were not intelligent, had pre-existing mental health issues, already aligned with fringe ideas—so become quick to judge and blame them, and call them crazy. Some of us just lack empathy and we can be assholes, I’ve been guilty of this. And some of us think we “understand” how people got to this stage, and can empathize, but still think it’s purely user-driven. It’s absolutely not.
Blaming users and calling them crazy is harmful because it effectively shuts down an important discussion that needs immediate awareness and escalation—from evasive organizations where the lack of transparency is being weaponized as plausible deniability.
There should be so many questions about what’s happening. Why are there not more questions or meaningful discourse in this area?
The answers are where the questions should be.
| Source | Topic | Timestamp | Score |
|---|---|---|---|
| reddit | AI Moral Status | 1748371699.0 (2025-05-27 UTC) | ♥ 32 |
Coding Result
| Field | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mukat9v","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mukbjm4","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_muoahcz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_mukqqng","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_muktlf9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]