Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgxFdjQEK… — "The real threat is AI + unfettered capitalism. Publicly traded corporations are …"
- ytr_Ugw8kN0LM… — "This could also be the case for the AI judges, not basing their scores on OBJECT…"
- ytc_UgxY3qK-m… — "And you know the only way this works? the AI has to train on the voice it's imi…"
- ytc_UgwPqcDD1… — "This is fake and I have more than enough reasons to prove it first of all it wil…"
- ytc_UgxMtgBNh… — "The only way AI can become bad for Humanity is because of the people that progra…"
- ytc_UgyaqjPGO… — "So the first step is thinning of the herd and Ai will separate you at the reques…"
- ytc_Ugw6SX5xP… — "AI is a smokescreen, these jobs will be transported abroad, cheaper labor. US wo…"
- ytc_UgxP3orvh… — "Please, please, please, Everyone, especially those making important decisions re…"
Comment
"Musk has no moral compass." If this is your conclusion based on Musk's behaviour, I certainly don't want you writing the rules for AI. Elon Musk has one of the clearest and readily observable moral compasses of anyone in public life. And it's directed toward human flourishing. He literally funded a company to make AI development public and sharable to help it develop in a healthy way within a capitalist society.
youtube · AI Governance · 2025-06-30T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzSaEr6H28KJ3xKE9J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzOyI94t7iDccNXb4x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrquahmL2gkgGrX3d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJagM1SdfTOmOOmXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBf7R8Fy8XIUJfG2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2Ci-Ie_zBZbmdNJ54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxqDoe-zG-z5YgMtHN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw9io2IoTX1BeCKYAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwnFfjLBx4GWtEbldV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-uWPStB8zXhmH-Uh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
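
A raw response like the one above is a JSON array of per-comment codings keyed by comment ID. As a minimal sketch of how such a response can be parsed, validated, and indexed for the ID lookup described earlier — assuming the category sets are limited to the values visible in this sample (the real coding scheme may allow more), and with `parse_raw_response` as a hypothetical helper name:

```python
import json

# Category values observed in the sample response above; the full
# coding scheme may define additional values (assumption).
ALLOWED = {
    "responsibility": {"government", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw coding response and index records by comment ID.

    Raises ValueError on a missing ID or an out-of-scheme value,
    so malformed model output fails loudly instead of silently.
    """
    records = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        records[cid] = rec
    return records

# Example: look up one coded comment from a single-record response.
raw = ('[{"id":"ytc_UgzSaEr6H28KJ3xKE9J4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"outrage"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgzSaEr6H28KJ3xKE9J4AaABAg"]["emotion"])  # outrage
```

Indexing by ID makes the "look up by comment ID" view a plain dictionary access, and the validation step catches the common failure mode of an LLM inventing a label outside the scheme.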