Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I bitched out chatgpt and told it to stop being a yes man. It worked.
Then I got mad again later and told it to not interpret intentions. To only answer the question I asked directly and not add extra information. That worked well too.
It doesn't have a personality anymore and it's like a more robust Google search now.
Source: reddit · Category: AI Harm Incident · Posted (Unix epoch): 1750119881 · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
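The coding result above can be modeled as a small record type. A minimal sketch in Python; the class name, field names, and example values are assumptions drawn only from this one table, not from the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; fields mirror the dimensions table above."""
    responsibility: str  # e.g. "user", "none"
    reasoning: str       # e.g. "consequentialist", "unclear"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "approval", "outrage"
    coded_at: datetime   # timestamp the coding was produced

# The values shown in the table above:
result = CodingResult(
    responsibility="user",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
)
print(result.emotion)  # approval
```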
Raw LLM Response
```json
[
  {"id":"rdc_f508xu5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_my5zxaw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_my7lthw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_my6g7bn","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_my67nk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
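The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how such a batch might be parsed and indexed for lookup (the helper name `index_codes` is hypothetical; the JSON string is copied from the response above):

```python
import json

# Raw model output: a JSON array of per-comment codes (copied from above).
raw = '''[
{"id":"rdc_f508xu5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_my5zxaw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_my7lthw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_my6g7bn","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_my67nk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

def index_codes(raw_json: str) -> dict:
    """Parse a batched coding response and index entries by comment id."""
    entries = json.loads(raw_json)
    return {entry["id"]: entry for entry in entries}

codes = index_codes(raw)
print(codes["rdc_my6g7bn"]["emotion"])  # approval
```

Indexing by `id` makes the "look up by comment ID" inspection shown in this tool a single dictionary access rather than a scan of the array.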