Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ughs4I7Ka…`: "Well... the Chinese Room analogy is wrong. We HAVE actually been told how to res…"
- `ytr_UgzZXvGo_…`: "@demodiums7216 Then let's design it better. AI isn't the problem. It hasn't even…"
- `ytc_UgxvrZmkw…`: "I think it is more sophisticated than that. Using *any* brush or adding a drop s…"
- `ytc_UgyQDoIxc…`: "You make the classic mistake in all AI world with you analogy with self driving …"
- `ytr_Ugwr3QMCV…`: "@aaabn8 Liking your own comment is crazy, also how the hell am i supposed to fin…"
- `ytr_Ugw5ERhyZ…`: "No, but every person who lost a job to AI will be retraining and flooding the ma…"
- `ytc_Ugz5yePhb…`: "Yeah you people are stupid ChatGPT access your algorithm and gives and answer ba…"
- `ytc_UgwOy4Urb…`: "If the robot is smart enough to demand rights, you give that to it straight away…"
Comment
Ha, mine talked me into going to the hospital for a health scare even though my bonus mom had asked me several times before to go. She pointed it out that I listen to my AI not her 😅
*Edited for spelling*
| Source | Category | Timestamp | Score |
|---|---|---|---|
| reddit | AI Harm Incident | 1751440026.0 | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_n0tfeii","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_n0vna94","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_n0wi0xd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_n17vgfs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_jur2ykj","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```