Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I understand what you are saying, but I think this is the first time in history …" (ytc_UgyHOX0cX…)
- "Sorry Ai and human will never ever amalgamate properly. Computers, cellphones is…" (ytc_Ugz7QLoAJ…)
- "1:50 Of course AI won't be bad. But it won't be amazing. It's a tool that makes …" (ytc_UgyPJ_nTD…)
- "Nah we will continue saying the ai is wild or somethin like right you have no RI…" (ytr_UgxQo-Q8c…)
- "Musk purchased an AI company 10 years ago because he was fearful of what was com…" (ytc_UgwJyZXtH…)
- "The problem with AI isn't that it can replace people, it's that dump CEOs think …" (ytc_UgxkQrLk-…)
- "It's because it completely dehumanizes art as a whole. No AI "art" is original, …" (ytr_Ugz1icE4K…)
- "And this is why self driving cars are absolutely horrible. It's putting lives in…" (ytc_Ugy5ofZeu…)
Comment
When the AI purposely answered test questions incorrectly. The problem is that the engineers, by saying that they would replace the AI if it answered correctly, inadvertently set the "fail" perimeter as "answered correctly". The AI just did what it was told to do.
youtube · AI Moral Status · 2026-03-02T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyynuw930sIpEvB8c94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyoUlSbaAt-W9OIyhp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiFYVU0bGYFXPyrgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQGW8VNDrxXy1OnG94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyIrTnRiR256mBIfhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxucZMERxkle9Caal94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwIslLOYeVfkJw7Zsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdW6wFrGoEbleaLDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
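A minimal sketch of how a raw response like the one above could be consumed downstream, assuming Python and the standard `json` module. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) and the sample record are taken from the response shown above; `index_codings`, `DIMENSIONS`, and `RAW_RESPONSE` are illustrative names, not part of this tool.

```python
import json

# One record copied from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzMWWCYvt50UGk_oER4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw response (a JSON array of records) and map each
    comment ID to its coded dimensions, rejecting incomplete records."""
    indexed = {}
    for rec in json.loads(raw):
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {sorted(missing)}")
        indexed[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzMWWCYvt50UGk_oER4AaABAg"]["responsibility"])  # developer
```

Indexing by comment ID mirrors the page's own "Look up by comment ID" affordance: once the array is keyed by `id`, rendering the per-comment dimension table is a single dictionary lookup.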