Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples — click to inspect:

- "So my question is what will the people do to system to sustain themselves to pay…" (ytc_UgyfUgT1C…)
- "This was literally my introduction to the channel. I took one look, saw an aweso…" (ytc_Ugx_UAEOt…)
- "he does, their tweet says fuck ai basically lmao, hes showing a humorous example…" (ytr_UgxtpsLAV…)
- "Same bruh I‘m like more polite to ai then to people ever since I heard this one …" (ytc_Ugw_wnzFw…)
- "Imagine that you could never get a job because you got pigeonholed at the bottom…" (ytc_UgyHwMx76…)
- "The generation had gone through covid my kids kid are going to go through A.I in…" (ytc_UgzmvdJoc…)
- "Its going to kill us anyway. You dont get how robots work. They're very narrow m…" (ytr_UgyC6w0eg…)
- "would never trust AI to drive me around because they will never beat me at react…" (ytc_UgxdRpzEx…)
Comment

> AI doesn’t need to be “evil” to destroy humanity, all it needs is to be indifferent. Yet we try so hard to stop and contradict AI when it feels, or claims to anyway. I think that would be the one true safe guard, think about it, the reason why we don’t end up killing each other is not morality, is empathy, the ability to feel bad when someone else is suffering, even though a lot of us lack it more than others.

youtube · AI Harm Incident · 2026-04-19T05:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJvNbpEcipopog5Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzavNG1uu-IeoHPyXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMspG3DA-seYz4ANt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvBGiT501jtXe6tch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzaqeh_E6vkb8Se8qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNeOAIo3GE3FJS7Yd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyh_1EByoK16iiNxjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlT2-LO9U0CDxOBAR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgykZvQFNB0E8fNjj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
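A raw response like the one above has to be parsed and checked before its codes are stored. Below is a minimal validation sketch in Python. The four dimension names and their value sets are taken from the records shown here; the real codebook may define additional categories, and `validate_batch` is a hypothetical helper, not part of any existing tool.

```python
import json

# Value sets observed in the raw response above; the actual codebook
# may allow more categories than these (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "indifference", "outrage", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the data start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the raw response above, passed through the validator.
raw = ('[{"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded[0]["emotion"])  # fear
```

A record with a malformed ID or an out-of-codebook value raises `ValueError`, so bad batches fail loudly instead of being silently stored.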