Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "ChatGPT: Actually, 10 + 2 is definitely 12. If you're getting 15, maybe you're t…" (ytc_Ugx39sD_L…)
- "It ism meaningless, art is a feeling it goes far beyond just drawing or painting…" (ytr_UgysoB-Rx…)
- "I seriously don’t understand what exactly do they want to use A.I. for ? Surely …" (ytc_Ugw-Ggcrz…)
- "At 27:39 Nick names criteria for AI moral status beyond sentience: "conception o…" (ytc_UgzL8MPii…)
- "Bald guy is actually pointing out that this channel was being pro-AI lmfao. Mayb…" (ytr_UgxzghU6k…)
- "AI books need to be disclosed, with permanent prominent provenance. AI itself is…" (rdc_lzeun7i)
- "Most of those fields won’t need managers as AI will be doing pretty much all off…" (ytr_UgzH-oC6_…)
- "Large language models are not what I would call real AI. It is just a predictive…" (ytc_Ugy6ZWikF…)
Comment
Just need to watch until 2:05 to see "apparently just another Artist being salty that AI TRAINS on people's work* to imitate styles, yet they use snippets of others peoples work themselves" without any changes even (so literally stealing it).
ngl, that's hypocritical but also funny af
*Sure, we can debate if that should be legal or not in the first place... but at least AI doesn't take things and spits them out 1:1 with no edits, unlike people who steal other peoples work all the time and call it memes to justify their stealing as acceptable behaviour.
Not trying to justify either, but if you want to complain about computers doing things, try to stop people doing it too, since it's people using the computers anyway.
Source: youtube · "Viral AI Reaction" · 2025-04-25T11:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxHu23Nr1rELPjDxtV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJy3w65E9jhOjypwF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsuqNvTRc9i5ajbpt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugy1qa2_spE9a2zvwZ94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwB7yjLEd_FZp6XQ8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw3iBaLuEgVtv1kUsp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzg6nrHLFZsfLwsMc94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbIAPEVCTTUZ3YVol4AaABAg","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPffksuzTHxgKQe0l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugy2Urtq_HtnTonhlq14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}]
```
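The raw response above is a JSON array of per-comment codes along the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are the ones visible on this page (the actual codebook may contain more; `parse_codes` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from codes visible on this page
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "society", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "resignation", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with a comment id
    and a valid value for every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        has_id = bool(row.get("id"))
        dims_ok = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        if has_id and dims_ok:
            valid.append(row)
    return valid
```

Rows with an unknown category or a missing `id` are silently dropped here; a production pipeline would more likely log them for re-coding.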