Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I really think focusing AI regulation on AGI is a pointless distraction that obscures the way that AI consistently is used in harmful ways already. Given that "intelligence" isn't really a quantitatively measurable thing (not in its entirety and not with any accuracy) AGI is already relegated to being a buzzword rather than an actual standard anything can be compared to. Meanwhile LLMs are being sold as an alternative to human workers and Sora is making misinformation more prevalent. The people who profit from this harm are a very small group and many already have lifetimes worth of money. it's frankly stupid to be talking about AGI like a) it'll probably exist and b) it's a relevant issue right now. There are real, non-sci-fi issues with the industry that can be adressed.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-01T02:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy5nzhpBpXHtDITV6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwhy-_ektzjYrwZg3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzh2fxLGfLTzk2nmJl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2zUO-efpUZtWy4Ex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOi8Sl6ZGRdkwZpyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlMGwP678Uvk4uTwt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEDAY-BLwPAV980N14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxUevvTjVxa5Bhhw3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF8QubCPPM10BS66h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
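The Coding Result table above is just the entry from this batch response whose `id` matches the inspected comment. A minimal sketch of that lookup in Python, assuming the raw response is always a JSON array of flat objects and using a hypothetical allowed-value set for each dimension inferred from the values visible here (the real codebook may include other values):

```python
import json

# Hypothetical codebook: allowed values per dimension, inferred from the
# table and raw response above -- an assumption, not the actual schema.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

# Abbreviated sample of the raw batch response shown above.
raw = """[
 {"id":"ytc_Ugy5nzhpBpXHtDITV6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_UgyF8QubCPPM10BS66h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    rejecting any value not in the (assumed) codebook."""
    coded = {}
    for entry in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry.get(dim)!r}")
        coded[entry["id"]] = {dim: entry[dim] for dim in SCHEMA}
    return coded

codings = parse_codings(raw)
print(codings["ytc_Ugy5nzhpBpXHtDITV6x4AaABAg"]["policy"])  # regulate
```

Validating against an explicit allowed-value set catches the common LLM failure mode of inventing off-codebook labels before they silently enter the coded dataset.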