Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a specific comment ID or by browsing the random samples below.
Bushman feels discouraged every time he looks at students who pass a hard subjec…
ytc_UgygdlZ9j…
OpenWorm shows that we can simulate neuron connection maps. Brain simulation is…
ytr_Ugx7HbBXe…
Don't blame this on AI. It's a people issue in this case. An idiot with power ma…
ytc_UgynMsTtj…
The Media has influences the masses when they put out all the propaganda about C…
ytc_UgzvhV4Sg…
stopped the video after 4 minutes....idc
if anyone say ai is not beneficial the…
ytc_Ugxh3juXU…
If the rise of an all-powerful artificial intelligence is inevitable, well it st…
ytc_UgyiOw8-x…
So... If you could ask the AI why they did what they did in the picture you'd be…
ytc_UgwkuzNqB…
AI artins was still under fire while open devinatarr and all you can see is yiff…
ytc_Ugzg4CJZo…
Comment
So disgusting how underaged users are actually using both ChatGPT and Grok to cheat on school exams, erase screen time on their electronics devices, have it write out dark stories, etc., like seriously! ChatGPT now has very strict guardrails to prevent all of that—but kids are now actually resorting to *Grok* just to EVADE the filters of ChatGPT. I honestly think they should impose the strict filters on Grok as well, as a matter of fact to the point that any content deemed inappropriate (and similar content detected by the backend AI system) would be utterly blocked out with the automated “Sorry, I can’t help you.” response. Simply and totally.
youtube
AI Moral Status
2025-10-31T04:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzGDNZq-u-DTgUKNvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxc9Z6iROFCqV7oQmh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugye8x9lteq2m2gvfG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz0rY3yykf6sRZ-iq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKFq4wXr-g7Iy9u-54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwBVVXffoTXNzzGt-94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxAfabQlD4rux8X7ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1auMf-pm1SfayP094AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzAPk40ttc0CttxORh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzH3SMOAEVGKgpiMSx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
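A batch response like the one above can be turned back into per-comment codings with a small parser. The sketch below is a minimal example, assuming the allowed values for each dimension are exactly those seen in the samples on this page (the real codebook may include more categories), and using a made-up comment ID for illustration:

```python
import json

# Assumed codebook, inferred from the sample output above; the actual
# coding scheme may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "fear",
                "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a batch LLM response into a dict keyed by comment ID,
    rejecting any row whose values fall outside the assumed codebook."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Usage with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_codings(raw)
print(coded["ytc_example"]["emotion"])  # outrage
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents a label outside the codebook, so bad rows fail loudly instead of silently skewing the coded dataset.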