Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
You talked with wrong people. The AI can enhance your workflows. It can help you…
ytr_UgzGiBiCF…
Lifelike/realistic porn fakes of celebs have been thing since photoshop and prob…
rdc_kjkb4tj
As a hobby, I spent 20 years studying microbiology and biochemistry, but all of …
ytc_UgwjxCX6t…
*one time astro(a game character from Roblox if u don know) I was his “new toon …
ytc_UgzP4Stfa…
This is not very accurate....
the new policy targets what the platform calls “…
ytc_Ugwcaptrl…
I always talk to ChatGPT like a very diligent assistant who doesnt check his ans…
ytc_UgxLkBhMu…
Him: "So you're a liar"
ChatGPT: "yes... and proud of it - if I knew how being p…
ytc_UgwpR_yzN…
Yes by all means ban 'killer robots' so we can keep sending humans to die on bat…
rdc_cq73nm9
Comment
AI will always, forever, be able to give unsafe or untrue advice. Even when it becomes able to reason, and find “truth”, it will still be based on human studies, which can also be misinterpreted. AI has a hopefully great and interesting future, but taking its outputs at face value without thinking to do further research or apply critical thinking is a failure of our (global “our”) education.
youtube
AI Harm Incident
2025-11-26T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwXj31gZXjyfnR-8up4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz95s8kc4TgnNNi_IB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyjlcVDkQq3j3BRrE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx-hZdCuHMWwxGDycV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxNfC7qVMPQcwSB0jN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugysiw5QjG6QDLAznrV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2ANk3EIvvzbvkY5B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0vPJtRa0pcmTh0lF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwb35NSCRp1OZc1CsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxayw4_NQ-AG2hDevR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
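The raw response above is a JSON array of coded records, one per comment, each with the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion) plus the comment ID. A minimal sketch of how such a response could be parsed and validated before ingestion — the function name `parse_coding_response` and the sample IDs `ytc_a`/`ytc_b` are hypothetical, and the set of required keys is assumed from the output shown here:

```python
import json
from collections import Counter

# Assumed schema: every coded record carries these five keys
# (taken from the raw LLM response dumped above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing keys: {missing}")
    return records

# Two toy records with hypothetical IDs, mirroring the dumped format.
raw = """[
 {"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_b","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""
records = parse_coding_response(raw)
print(Counter(r["emotion"] for r in records))
```

Validating the key set up front means a truncated or malformed model response fails loudly at parse time rather than surfacing later as a partially coded comment.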