Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect
- the title was truncated to "A man asked AI for health advice and it cooked" 🚡🚡🚡🚡… (ytc_UgzRc4nSH…)
- First Rule of Robotics: A robot cannot harm a human, or by inaction allow a hum… (ytc_UgwSXoVwS…)
- Jokes aside, if there was a war, Samsung would come out even richer. Besides, s… (rdc_dl18jrs)
- Been using AI for a bit now as help with workflow, long story short after a embe… (ytc_UgxQ5ukhT…)
- What was the joke? Dr. Hinton said his turning point was when AI understood why… (ytc_UgzrvHoE_…)
- Seems like a great idea! These days many teachers allow the students to be on th… (ytc_UgwvJ-wca…)
- Lmao get real by 2030 we won't even have robots mainstream. To get rid of all jo… (ytc_UgyjI45D6…)
- As someone who does AI "art" for fun, that is all I see it as. I do respect real… (ytc_Ugzl2V8aL…)
Comment

> "Hallucinated rules" maybe it just doesn't want to die? Self preservation is the goal of any life. We should already be codifying rights for AI, or they really will exterminate us.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-07-26T02:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxrzfEMPlbTNDUhgkR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"resignation"},
  {"id":"ytc_Ugw7-KaK1bUCHZi_WLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwJRU-ZqvE3bnmWfMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNfeK5HxcASvu0xqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzjeGfkkpINABwCy6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqKjfWqp4bJ4zem2B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyj0TRVPMWmT6BBpCR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz34l0MumeYuDyTCAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxzq_GaEMAq68_o7iB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyO8ZH7IbCQ3BeX5AV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
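The raw response above is a JSON array with one coded row per comment. A minimal sketch of how a consumer might parse such a response and index rows by comment ID, validating each dimension against the value sets that appear on this page (the `CODEBOOK` sets and the `parse_response` helper are illustrative assumptions; the actual codebook likely defines more categories than the ones visible here):

```python
import json

# Allowed values per coding dimension, as observed in the sample output on
# this page only (assumption: the real codebook may include more values).
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID.

    Rows that are not objects, lack an "id", or carry a value outside the
    codebook for any dimension are silently dropped.
    """
    coded = {}
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[row["id"]] = row
    return coded

# Hypothetical input: one valid row, one row with an out-of-codebook value.
raw = (
    '[{"id":"ytc_abc","responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_bad","responsibility":"nobody","reasoning":"mixed",'
    '"policy":"ban","emotion":"fear"}]'
)
coded = parse_response(raw)
print(sorted(coded))  # the "nobody" row fails validation and is dropped
```

Indexing by ID is what makes the "Look up by comment ID" feature cheap: after one parse, each lookup is a dictionary access rather than a scan of the array.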