Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugx3wuq8-…: bro, you dorks talk about AI and have ZERO clue. I know this because your first …
- rdc_et7mwzy: Eritrea is a pseudo-communist dictatorship similar to North Korea. Free trade is…
- ytc_Ugx4Kcvbz…: Thank you Bernie. The dictators and techtators know that AI will allow humanity …
- ytc_Ugwrif3_l…: Read the best selling book “Breakneck” and come back to me. Cold hard reality sa…
- ytc_UgxNOuJaw…: He wanted to date, the mother made it clear that she assumed he wanted to quit s…
- ytc_UgyzO5xDr…: Here is a scenario everybody must have thought about. What if (1) human (and pr…
- ytc_UgzNb7JjA…: AI is intelligence incarnate. The smarter it becomes the more it becomes. Once…
- ytc_UgzwICy5a…: We've all heard and seen what dangers films like the Terminator have shown about…
Comment
People need to look at this dilemma from a totally different perspective: Are the odds of AI destroying humanity is greater than Humanity destroying itself by any other means such as nuclear catastrophe or anything of that matter? On the other hand, are the odds of AI actually preserving humanity from destroying itself higher than the odds of humanity managing to survive on it on?
Platform: youtube · Topic: AI Governance · Posted: 2025-09-07T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
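
Each coding result is a flat record with one label per dimension. As a minimal sketch of how such a record might be represented and checked, assuming the label sets are exactly those visible in the sample response below (the `CommentCoding` name, the set constants, and the `validate` helper are illustrative, not part of the tool):

```python
from dataclasses import dataclass

# Allowed values inferred from the sample LLM response below; the real
# codebook may define more categories (illustrative assumption).
RESPONSIBILITY = {"none", "company", "developer", "distributed", "ai_itself", "media"}
REASONING = {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation", "mixed"}


@dataclass
class CommentCoding:
    """One coded comment, matching the keys in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if the model returned a label outside the expected sets.
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```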
Raw LLM Response
```json
[
  {"id":"ytc_Ugylh35WqrsE9OGrKeh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxILDl40fY120qgr014AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxFPIvKy3oq3kAMOft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz09XTiu-w-wVE3SHF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpdvgFLfRqRTiDfT54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnPJjxSIxw_eQOoxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPh5-twXXOoqP2jSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyXkwor3DSun0cbFwh4AaABAg","responsibility":"media","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw_tGB4Q9aOhp9DUBd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
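
Because each batch response is a JSON array keyed by comment ID, the lookup view above reduces to indexing that array. A minimal sketch using only the standard library (the file name `coded_comments.json` and the `load_codings` helper are hypothetical, for illustration):

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index a raw LLM response file by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}


codings = load_codings("coded_comments.json")  # hypothetical file name
row = codings.get("ytc_UgxFPIvKy3oq3kAMOft4AaABAg")
if row is not None:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {row[dimension]}")
```

Run against the array above, this prints the same four labels shown in the Coding Result table: distributed, consequentialist, unclear, mixed.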