Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Folks (somewhat) joke that the US is a plutocracy, but Korea *really* seems like…" (rdc_lj95cap)
- "Its not hard to imagine the more sinister possibilities of AI imitating or overr…" (ytc_UgyhKWmue…)
- "Ok, but, when A.I. makes human workers irrelevant, then with humans not working,…" (ytc_UgwiokYXz…)
- "What really bugs me about the medical industry is that they do not want people g…" (ytc_Ugzy0_rXd…)
- "If you think a studio executive with an AI is gonna produce something better tha…" (rdc_jtc7lc8)
- "At the company I work at we have to go to a LOT of schools every week, and when …" (ytc_UgyyX6_Ra…)
- "If the money comes from tax dollars, it is basically forcing richer people to sh…" (ytc_Ugz75WJ7Q…)
- "I asked chatgpt 5 and it said it would kill 1 human to save 1000 ants...…" (ytc_UgzJ2u4B-…)
Comment
The poisoning of AI art should always work in some way, because as long as they try to train any AI at scale using pictures online, there is a way for any number of people to embed poisoned data into that dataset. It's impossible to filter out easily, cause of the way they steal work to train their models en-mass. It's by design impossible to inoculate against without constantly altering how their models are trained, which then can be adapted to by poisoning algorithms. It's always worth poisoning any image that is uploaded anywhere!
youtube · Viral AI Reaction · 2025-04-01T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyylg7QNGNBqprGRPN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzKGCg2nSGq2lwgfHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9Hrv00Yiqu6Fj_354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxg9m_0ya_EY6zfS7F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZpMFQsXp20qNb8wh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyrvlPl296OorXwhuV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxN-I_twFH2AyorH1Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwphjZfTvyPCCtuH0F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyl-qcPnD_-viyxjkd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwtFUyRbXm4POl-g0h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}
]
```