Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Look up by comment ID
Random samples — click to inspect
@Iman_savageai: Do you think we can trust AI more than humans? Sounds a little s…
ytc_Ugw3UI5zr…
Imagine buying art from an AI "artist" when all they're doing is pushing a butto…
ytc_Ugx_vr2EC…
Excuse me, *the military's new AI*? What?
edit: oh they have their own chatbot…
rdc_ntalef6
Basically talking about replacing some human jobs with robots. But if they were …
ytr_UgxaA2RDq…
Lily sounds mentally ill, and it sounds like some of her social anxiety regardin…
ytc_UgzZfd-Jb…
In my job we aren't allowed to use any AI tools as it's a classified application…
ytc_UgzCbA_xg…
Don’t forget that this is all with the cost of AI being subsidized by investors.…
ytc_UgyXaC9rw…
"we are really in danger when the AIs have their own infrastructure"
so what I'm…
ytc_Ugwk184Px…
Comment
For a truly superior intelligence to exist we need to introduce the "Examen" process and an act of contrition. Without these elements superior intelligence is nothing but an unthinking robot, no matter how well it simulates human traits. It is all nothing but hypocritical, I.e. events strung together in a mad existence.
youtube
AI Governance
2025-07-15T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzPq3nqDV53YPl-zXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyz8aGdK5dLTprIv214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGa0GPiYnOxHNnBOB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRpNx5it1_saM0_zR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxqjyHMMmxdb6V_SFN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxkeOm5kO_AS3tWORB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugwrq7k8maFQvlnsvYd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyvTI4jY0kPWjJOcP94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwOqyL2Io52TOCKcxd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwpBLOoNqtvZnUUGLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
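A raw response like the one above is a JSON array of records, each carrying a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before display, assuming the allowed values are exactly those seen in the examples on this page (the real codebook may define more categories, and `parse_coding_response` is a hypothetical helper, not part of the tool itself):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption; the actual codebook may include other categories.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id.

    Raises ValueError if a record is missing a field or uses a value
    outside the allowed set, so malformed model output is caught early.
    """
    by_id = {}
    for rec in json.loads(raw):
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={rec.get(dim)!r}")
        by_id[comment_id] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

# Usage with a single-record response (hypothetical comment id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
```

Indexing by `id` makes the "look up by comment ID" view a dictionary access, and failing fast on unknown values keeps bad model output from silently entering the coded dataset.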