Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
A decade ago, I was studying an extensive degree in a private university for Ana…
ytc_UgwPtBqp7…
Areas where humans remain essential
Emotional Intelligence and Empathy: AI cann…
ytc_Ugw7BRyPl…
I don't know but the first robot or human you showed us looks like P.E Teacher 🫠…
ytc_UgysdCsFE…
I love chatgpt tbh.. I haven't used it for school or cheating but everything els…
ytc_Ugx8fyosj…
Everything we do or learn can be learned. Except for our humanity. So maybe focu…
ytc_UgzO1Xvo4…
I was speechless in the first moment and then u realize its not some meme from a…
ytc_UgyVukf4f…
I think Karel Capek covered all this in the early 1920's, as I remember it, thin…
ytc_UgxlVIjxD…
I dunno, I think in much the same way AI can explore possibility spaces in medic…
ytc_UgzcqqGTe…
Comment
Any superior intelligence will look upon humanity as a destructive force on earth.
The only winning move is not to play.
Of course governments around the world may regulate AI heavily, but that will not control those who's intentions are to revolutionise their immediate surroundings.
I believe we are past the tipping point where either we kill the planet or AI destroys humanity.
Source: youtube · Topic: AI Governance · Posted: 2025-08-28T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwEcsd5cSbweTVUir94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxIBjkmF-dzRwB1l7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziSPTsbS8fa_Ma5ex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwrTWSU-YTNuN7eK594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzxWfy_DHW5tu2CET94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVyMvz_Uwkm8OeSj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxzJy8d5iwk1FOH9Dd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwI7jatMJhxopV6t1h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgynWi93kP-me27BjAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwQSnG-1095Vj28WPV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
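The raw model output is a JSON array of coded records, one per comment ID, so looking up a single comment's coding is a parse-and-filter. A minimal sketch (the `find_coding` helper is hypothetical, and `raw_response` here holds an abridged two-record version of the array above for illustration):

```python
import json

# Abridged raw LLM response: a JSON array of coded records keyed by comment ID.
raw_response = '''[
  {"id": "ytc_UgwEcsd5cSbweTVUir94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgynWi93kP-me27BjAJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "resignation"}
]'''

def find_coding(response_text, comment_id):
    """Parse the raw model output and return the record for one comment ID, or None."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

coding = find_coding(raw_response, "ytc_UgwEcsd5cSbweTVUir94AaABAg")
print(coding["policy"])  # → regulate
```

Returning `None` for an unknown ID keeps the lookup safe when a comment was dropped or the model skipped it in its response.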