Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below:
- “My worry is not that AI will take over. My worry is that humans will take over A…” (ytc_UgwePP7wE…)
- “Um.. we weren’t born with it we had to learn and practice and ai takes away from…” (ytc_UgwTxa8iu…)
- “You know AI is evil when big companies like Disney are the "good guys" in the si…” (ytc_UgwSdrLZ_…)
- “BUT HIW DOES CHATGPT HAVE.ACCESS TP THISNINGORMATION..IT SHOULDNT RIGHT?? WHY IS…” (ytr_UgyuaeaMk…)
- “@LavenderTowne I got to say, "AI bros" is NOT the same thing as "Crypto AI bros"…” (ytc_UgzTS4C-h…)
- “It’s easy. Push out a bunch of ai slop, and then while the suckers are reviewing…” (rdc_oi2u5x8)
- “When doctors start using AI, I don’t want to be alive anymore. I hate AI and wis…” (ytc_Ugwf5s_99…)
- “with the soldier i saw it was real instantly since ai cant make guns look perfec…” (ytc_Ugwv8v4iz…)
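The lookup-by-ID and random-sample views above can be sketched with a small in-memory index. This is a hypothetical helper, not the tool's actual implementation: it assumes comments are held as dicts with `id` and `text` fields, with platform-prefixed IDs (`ytc_`/`ytr_`/`rdc_`) as shown in the samples.

```python
import random

# Hypothetical in-memory store; field names ("id", "text") are assumptions
# based on the sample rows shown above.
comments = [
    {"id": "ytc_UgwePP7wE", "text": "My worry is not that AI will take over..."},
    {"id": "rdc_oi2u5x8", "text": "It's easy. Push out a bunch of ai slop..."},
]

def build_index(comments):
    """Map comment ID -> record for O(1) lookup by ID."""
    return {c["id"]: c for c in comments}

def random_samples(comments, k=8, seed=None):
    """Pick up to k comments to surface for spot-checking."""
    rng = random.Random(seed)
    return rng.sample(comments, min(k, len(comments)))

index = build_index(comments)
print(index["rdc_oi2u5x8"]["text"])
```

Building the index once keeps per-comment inspection constant-time even for large comment dumps; the seed on `random_samples` makes a sample reproducible when needed.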
Comment
The Ai has Human thinking and human language the problem is 'who is making the AI learning policies and data, and those are turning it into a monster'. Imagine a kid and then you put every video that ever existed, without context, then you hit him when he alucinates. THAT is the real problem the stupid CEO's and assume scientist feed it with every garbage the internet has and now they are worried it is unstable? I mean how they will expect anything else?
youtube · AI Moral Status · 2025-12-12T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyWl5ahv6bEDfvIVU14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwIs3TK__e5bvbDzGR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfoSj70VnLR1eSxIR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgySKISHYi-yeRONSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzZD4jiUgTtkXdaFNF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyyuPNL5l94w88m7bV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQXkY6iMQ8R69HyYJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxK4wo0XjS8OGuDMYt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgymmAdNXqr-jSAok6x4AaABAg","responsibility":"researcher","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyotDeoYYmPjkT8RZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
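A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is hypothetical: it assumes the model returns a JSON array of per-comment objects, and the `ALLOWED` sets contain only the values observed in this batch (the real codebook may permit others). `parse_coding_response` is an illustrative name, not part of the tool.

```python
import json

# Values observed in this batch -- the actual codebook may allow more.
ALLOWED = {
    "responsibility": {"developer", "company", "researcher",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def parse_coding_response(raw):
    """Parse raw LLM output; split rows into valid and rejected.

    A row is rejected if it lacks an "id" or any dimension holds a
    value outside the allowed set.
    """
    valid, rejected = [], []
    for row in json.loads(raw):
        ok = "id" in row and all(
            row.get(dim) in vals for dim, vals in ALLOWED.items()
        )
        (valid if ok else rejected).append(row)
    return valid, rejected

raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
       '"policy":"regulate","emotion":"outrage"}]')
valid, rejected = parse_coding_response(raw)
print(len(valid), len(rejected))  # -> 1 0
```

Validating against a closed vocabulary catches the most common batch-coding failure, the model inventing a label, so bad rows can be re-queued instead of silently polluting the coded dataset.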