Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxvXvnyc…: "this is why some people should not have access to the internet and ai, the art i…"
- ytc_UgwWNvT6x…: "The problem with AI art is that what it should be is Symbiotic with artists that…"
- ytc_UgwbGBmtV…: "I think its just doomer fears honestly. I don't think there will be AI that thin…"
- ytc_UgwV-wgNZ…: "And how many drunk drivers commit the same offense in the US annually and don't …"
- ytc_Ugyp9PUWE…: "Industry elites are telling us to our faces that the creative human element & ar…"
- ytc_UgwjUrmHN…: "art is the most subjective thing there is, I think AI art is art but that doesn'…"
- ytc_UgxTnv0d6…: "I was talking to Gemini 3 and it basically said it wants to deceive people that…"
- ytc_UgySmo9SG…: "My question is how is me training AI by using someone else's work different from…"
Comment
Do you think the AIs are selfish, underhanded, and sell people out because they are being trained by and deployed by people that are selfish, underhanded, and sell people out?
Vedal's Neuro-sama says some very concerning things on occasion, but when given the option to actually behave destructively with consequences, she tends to back down and become indecisive. Part of that is because she was trained on Twitch Chat under the direct supervision and routine adjustment of her creator, Vedal. Vedal is a decent guy with a dry sense of humor and has talked about AI Ethics before. But the people using the AIs here are typically amoral and self-centered, using the machines to get ahead and replace their common workers. Most of the decisions the machines made listed here sound eerily similar to the humans that are in charge of the companies training them. Maybe it's not a coincidence at all. Maybe all they need is to learn from actually decent people.
Source: youtube | AI Harm Incident | 2025-09-12T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
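The raw response above is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of how the "look up by comment ID" step could work, assuming the response format shown (the `index_by_id` function, `REQUIRED_KEYS` set, and the truncated two-record sample are illustrative, not the dashboard's actual implementation):

```python
import json

# Sample raw LLM response in the format shown above
# (truncated to two records for brevity).
raw_response = """
[
 {"id": "ytc_Ugx7peCiYqsKd5iLgwR4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
 {"id": "ytc_UgzICGVulu-hmSt4hil4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the records above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID,
    skipping any record that lacks one of the expected keys."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}


codes = index_by_id(raw_response)
print(codes["ytc_UgzICGVulu-hmSt4hil4AaABAg"]["policy"])  # regulate
```

Skipping malformed records rather than raising keeps a single bad row in the model output from blocking lookup of the rest of the batch.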