Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect:

- "We appreciate your engagement with the video! If you're interested in exploring …" (ytr_Ugy58PqR6…)
- "OMG, if you are taking ai computer codes this seriously, you got some mega probl…" (ytc_UgwEiXc3G…)
- "AI is a miracle! I just used to right now to give me a letter to send to HR for …" (ytc_UgwjbyaJr…)
- "It is weird, but I kind of tried writing my concerns to ChatGPT. I do think AI t…" (rdc_j43igju)
- "Artificial intelligence that observes and copies human action and grows intellec…" (ytc_UgwZHUUt3…)
- "I’ve always felt that the point of art is never JUST to create something that lo…" (ytc_Ugzt-LdPY…)
- "How anyone could watch this and still be on board with AI in it's current trajec…" (ytc_UgyJA7Fmd…)
- "we all know elon is scared of ai because they can figure out that he is alien…" (ytc_UgxFco1cl…)
Comment
AI can only end in nuclear armageddon. Soon. It's already too powerful. The nukes are the only way out and it has to happen before AI gets control of them. How tragic. It's the law of power. What power could destroy AI? Only the nukes. And if we don't launch them AI is definitely going to once it is given full control of worldwide arsenals. It will be the only way to keep the global capitalist system going for a little bit longer until it finally destroys itself when all the nukes are launched and we are finally able to create a new system from scratch. Yikes.
youtube
AI Moral Status
2025-08-10T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxDdIoNzeFw_BGHQLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_aga95E6uhDO3_rx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-Ny7KtvWEVCI6ab14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxr0UNLSGzflyEvAwV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzejrSnCevXz25vcfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUFYGd2FVLtiViJq94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxsoeTFZ1BleUO163x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxCZU-EJ61ccZ3qtqt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwN6bQ1HDmhWJZn0PJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx_qqQ2MauQDYF3d6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
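The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a response could be parsed into the lookup shown in the Coding Result table — the function name `parse_codings` and the closed value sets in `ALLOWED` are assumptions inferred only from the values visible in this output, not the project's actual codebook:

```python
import json

# Allowed values per dimension, as observed in the output above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any unexpected dimension value."""
    table = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        table[cid] = {dim: row[dim] for dim in ALLOWED}
    return table

# Hypothetical single-element response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["policy"])  # -> ban
```

Validating against a closed set here means a malformed or hallucinated label fails loudly at ingest time rather than silently appearing as a new category in downstream tallies.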