Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgyCoUpUH…` — "@Trojanstudios-y3dI here your point and I understand, however this is what I hav…"
- `ytc_UgwrXQYLy…` — "I know exactly what will happen, it's all written out in the bible, Revelation c…"
- `rdc_n7u6q8o` — "LLMs contributing to productivity is still up on the air, IMO. I still think it'…"
- `ytr_Ugz-bLGER…` — "@RewindOGTeeHeewow, I guess I consider "abstract" meaningful now. At no point di…"
- `ytc_UgyLKXRq9…` — "What the heck are you going to ask the public - who supposedly elected a demente…"
- `ytc_UgzUu3OMV…` — "Ai can't think at a human level. It can't completely follow a theme, create an e…"
- `ytc_UgwxxAo4P…` — "Chatgpt is the worst AI ever. Giving me the same kind of response every time…"
- `ytc_UgzUje5Ay…` — "Why not look at new opportunities that will be created from this boom? Blamin…"
Comment
> Generally I don't think companies should be held responsible for people being stupid. It's just when one can start seeing a trend and they still have not acted they should be at fault.
>
> That being said, I think AI already has a trend and they just fix each single issue instead of the fundimental one. Where most AI models will often be wrong, and all AI models will occasionally be wrong, the companies should be responsible to inform their users this is the case. Which is something I've never seen them do unless certain keywords are triggered.
youtube · AI Harm Incident · 2025-11-28T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwzF9zeZ06MzGAwk2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxU-_9auTarJyXTjXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1bf7r9wcDPUdHz3l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwslIcni6KMOtM3v3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxfiYEqf2_b_2QxdKx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuxSjCKl-aGtGGSfF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3Pi4FxBsxJhhLs8Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwul8TcYLbk1-zVyV14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyhRNyKSG8tULfGYG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
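A minimal sketch of how a raw batch response like this could be parsed, validated, and indexed for lookup by comment ID. The dimension vocabularies below are inferred from the values visible on this page, not from the project's actual codebook, and the function names are illustrative.

```python
import json

# Allowed values per coding dimension — inferred from the responses shown
# here; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def index_codings(response_text: str) -> dict:
    """Parse one raw LLM batch response and index codings by comment id.

    Raises ValueError if a row uses a value outside the inferred schema,
    which is how off-vocabulary model output would surface.
    """
    indexed = {}
    for row in json.loads(response_text):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        indexed[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return indexed

# Two rows copied from the raw response above, to show the lookup.
raw = """[
  {"id":"ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx3Pi4FxBsxJhhLs8Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

codings = index_codings(raw)
print(codings["ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg"]["policy"])  # liability
```

Indexing by ID rather than position makes the coding robust to the model reordering or dropping rows in a batch, which is exactly the failure mode a page like this exists to audit.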