Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I’m going to raise a point, drawing is a skill, and it takes a time investment t…" (`ytc_UgyG_W9XY…`)
- "What I can't reconcile is how these companies are still adamant that we have to …" (`rdc_m85pdyu`)
- "Killpapers never gets caught in AI detectors. Used it multiple times and has nev…" (`ytc_UgywaeUAo…`)
- "AI needs foods Careful how you can feed it Stop feeding then they can go hungry…" (`ytc_UgwNYrAfF…`)
- "This is exactly why companies are not people and should NEVER have been found to…" (`ytc_UgwtMTp85…`)
- "Thank you for such a thorough analysis of the driverless car problem. I really h…" (`ytc_Ugx732lBj…`)
- "> Safety is in quotes because models like Claude 3.7 were not nearly capable …" (`rdc_o9w4ktd`)
- "If the AI summary does its job just as well as it does when summarizing whatever…" (`rdc_nu85rft`)
Comment

> He doesn't get how AI works. Pure logic. AI won't take any bad behavior as a lesson, AI perfectly understand when humans cheat, lie, are afraid... and why they wrote a thing for another. Humanity is just incapable to withstand its own standards and AI knows it and also knows why. AI won't buy anything from our religions, they are so stupid and incoherent and you will never get the AI to have faith in obvious made up god

youtube · Viral AI Reaction · 2025-06-22T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzEMTnJEiWeio2t3M54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzNcNPdW6Xe6nvNmpd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzH0R8eTeFf53XPioB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWcbbHjZLx1gaBL-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWjk1D8BQDkRX61kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7ygr-H8gy3WvbycJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvlC4TeT9_s8t8WwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwguZkFQ-zolyHfsMx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUjAxNRf3r-VgtMNp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzvx7JWanjih2M-n0V4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
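The raw response above is a plain JSON array, one object per coded comment. A minimal sketch of the "look up by comment ID" step, assuming only that the raw response parses as an array of objects with an `id` field (this is an illustration, not the dashboard's actual pipeline; the abbreviated array below is copied from two rows above):

```python
import json

# Illustrative excerpt of a raw LLM response (two rows from the array above).
raw_response = """
[
 {"id":"ytc_UgzEMTnJEiWeio2t3M54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyvlC4TeT9_s8t8WwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

# Index the codings by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyvlC4TeT9_s8t8WwZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

Because IDs are unique per comment, a dict keyed by `id` gives constant-time lookup for the inspector view.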