Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.

Random samples:
- Also don't believe that AI is any better than humans because if AI was better th… (ytc_UgyMG50jY…)
- I love your (anti) AI videos so much 😩😩 been waiting for a new drop. Edit: I l… (ytc_UgyK3vUI_…)
- 😂😂😂 they don’t want to stop because the people who stop will never catch up….sto… (ytc_UgxcdrS1x…)
- @assassin5805-w7c AI can't create logic, it relies heavily on human input. Solvi… (ytr_Ugy2sf_7G…)
- Him: CHILL CHILL PLEASE I WANNA GET PAID! Robot: F!CK YOU IM TIRED OF THIS SH!T … (ytc_UgxWMZ9e9…)
- All this over a trespassing violation? The scary thing here is that AI relies on… (ytc_UgyaS4UZT…)
- Who needs AI to corrupt elections when the media totally does that. Turns Joe Ro… (ytc_UgwltPBuO…)
- That’s not the route I would’ve gone about why AI art is boring…”oh the machine … (ytc_UgwVS-QHI…)
Comment
So the A.i trained on human behaviour results to human behaviour when cornered? *Gasp* I'm not an A.i shill but these 'amoral' practices are used by flesh and blood people everyday to get what they want. A human instinctively priortizes itself over others in times of crisis unless that human holds the other individuals in high regard. It can be argued that the a.i is actually more moral than the human blackmailing because when another human blackmails they do it KNOWING full well they're malicious. Where the A.i is just doing what it was taught. You want to fix A.i learning you have to fix humanity first. There is also one thing A.i has over humanity that it's well aware of. You can't arrest or put to trial an A.i. It knows there is no punishment for crimes it would commit and pragmatically the most amoral actions yield the best result. That is why some of the wealthiest people in the world achieved it through amoral means. Exploitation of third world labor and immigrants, cartels, etc. We live in a world where it pays to be bad. An the A.i knows moral actions only serve the emotional and spiritual side of the self. Something a machine does not have nor need.
youtube · AI Harm Incident · 2025-08-13T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxd6SfNaXzdbxgJa7d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxHGx6ffZLlS5TYzlp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwUg8HsV40uZwuDPoZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5sCZ6dNBSPUXGfNx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwod9jO4iwe6cHa5dN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWaPpojE6zCHOYFqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFKL8H28jVjXdC7c54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXiARoNoCLr64dBd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkRyVN4XOJa2AaBzt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxOr6jAJtH_kaCd7WJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
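Since the model returns one JSON array per batch, looking up a single comment's coding means parsing the array and keying each record by its `id` field. A minimal sketch of that lookup, using two real records from the batch above (the function and variable names are illustrative assumptions, not part of the tool):

```python
import json

# Raw LLM batch response: a JSON array of coding records, one per comment,
# in the same shape as the batch shown above.
raw_response = """
[
  {"id": "ytc_UgyFKL8H28jVjXdC7c54AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxOr6jAJtH_kaCd7WJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgyFKL8H28jVjXdC7c54AaABAg"]
print(coding["responsibility"])  # → ai_itself
print(coding["emotion"])         # → approval
```

The record retrieved here matches the Coding Result table above (responsibility `ai_itself`, reasoning `virtue`, policy `none`, emotion `approval`), which is how a coded comment can be traced back to the exact model output that produced it.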