Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
ytc_Ugx-o5v5K… — "Text generator AI's didn't replace book authors and a code generator AI's won't …"
ytc_UgzEmoxU6… — "CLICK BAIT! CLICK BAIT!! CLICK BAIT!! AI IS A BIG JOKE! IF YOU USE AI YOU ARE…"
rdc_hmvj60d — "May I ask why you work overtime every single day without being compensated for i…"
ytc_UgzwWEGQu… — "Both and also it’s stupid, you type in a prompt, and you get art from the ai, yo…"
ytc_UgxBoJnKm… — "Inspiration can not and should not be copywritten. Inspiration can be a powerfu…"
ytc_UgyabxtLG… — "IF UR AGAINST AI ( and if u want 2) CHANGE UR PROFILE PICTURE YELLOW 2 SHOW IT!!…"
ytr_UgzUSr2-0… — "What are you talking about. The companies who control the industry love ai, it c…"
rdc_djr1m08 — "Hey so follow-up question. How could an attorney structure it so a settlement …"
Comment
If people do not take reasonable care, and a person is hurt or killed, the person who did not take that reasonable care can go to jail. That happens all the time. Left an unsecured gun out and a kid gets killed? You go to jail. If Sam Altman and the rest of these people faced the prospect of dying in prison if they build an AI and give it access to the controls necessary to operationalize killing people, they wouldn’t be talking in the cavalier way they are and they wouldn’t be at all tempted to roll the dice with a 10% chance people will be killed. AI should not be given control over systems, it should be able to recommend only and people implement the actions if they make sense. In no situation at all should a computer be given control over the nuclear button or the ability to control the power grid, or deliberately destroy crops, etc. If a company does that, it should be a crime that the people involved go to jail for.
youtube · AI Governance · 2025-08-30T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwB7V4CYxOmBTfx7sl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzu33lIKGJ07SD1BIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxhOjRpNDYmovr1TsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMs7Rxh1zab6V6Uvx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMXinURlHf8LYW0X14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
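The raw response above is a JSON array of records keyed by comment ID. A minimal sketch of how such a batch response could be parsed and indexed to support the ID lookup described earlier — the `index_by_id` helper is hypothetical, but the field names match the sample output:

```python
import json

# A small excerpt of a raw LLM response in the format shown above.
raw_response = """[
  {"id": "ytc_UgxhOjRpNDYmovr1TsV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwB7V4CYxOmBTfx7sl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

def index_by_id(response_text):
    """Parse a batch coding response and index each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgxhOjRpNDYmovr1TsV4AaABAg"]["policy"])  # → liability
```

In practice a real response may be wrapped in markdown fences or contain malformed JSON, so a production version would strip fences and handle `json.JSONDecodeError` before indexing.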