Raw LLM Responses
Inspect the exact model output for any coded comment; look up by comment ID.
Random samples:
- ytc_UgyVGZZ51… — "It’s getting more insane by day… Megan Thee Stallion was deepfake a couple of mo…"
- rdc_euee2qo — "People are stopped/arrested/convicted all the time based on fuzzy human memory, …"
- ytc_UgyFNGsRR… — "I don’t talk to a random bot for fun. It’s desperate, mental health crisis and s…"
- ytc_Ugx-Q6-Ol… — "I ask respectfully, but this guy doesn't have a computer science degree or engin…"
- ytc_UgySpCZhS… — "If so many hate it...why are the few want it badly? Better yet, why did outlets …"
- rdc_mzxpt4f — "I have a citation in NYC from a decade ago for a very similar reason that I had …"
- ytc_UgxmccXnZ… — "Err the idea of a computer, is to provide information, not to learn on it own, t…"
- rdc_mxyrz3n — "Exactly right. AI wont take your job, someone using AI might though so accept it…"
Comment (source: reddit; topic: AI Harm Incident; posted: 1744860711 Unix epoch; ♥ 7)

> Same. I spent years feeling unwell. MRI. Specialists, etc. gave Claude my symptoms and it told me to get 3 specific tests done. 2 of 3 came back conclusive. Now I've finally got the medicine I need to be normal again. Doctors and humans in general are highly unreliable.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mnvfo7p", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mnivpms", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mnkqgo4", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mniz8rt", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mnj7ih5", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
```
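The per-comment coding shown in the table above is recovered from a raw response like this by parsing the JSON array and indexing it by comment ID. A minimal Python sketch, assuming the raw response is a JSON array of objects with the four coding dimensions (the `raw_response` variable and `index_codings` helper are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codings,
# mirroring the structure shown above.
raw_response = """[
  {"id": "rdc_mnvfo7p", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_mnj7ih5", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw response and index each coding record by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["rdc_mnj7ih5"]["emotion"])  # → mixed
```

A dict keyed by comment ID makes the "look up by comment ID" step a constant-time access rather than a scan over the array.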