Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
- "Disabled artist here! And I wanna say ai is BAD. Art is an accessible medium for…" (ytc_UgwZ5XFfw…)
- "LOL! What fiction. They were afraid that the stock price would go down on their …" (ytc_UgxnodQhT…)
- "There's no competition to those predictions, but there's a lot of competition in…" (ytr_UgyXl16Ja…)
- "Y’all dense af if u hate AI and wanna make art for ur own joy and love just go d…" (ytc_Ugxx9rIpq…)
- "I made AGi with a 1.5 billion parameter model with RAG, it’s about the weights. …" (ytc_UgyFggYdE…)
- "I do draw IRL but I'm also using AI for my entertainment. And I'm sorry for usin…" (ytc_UgxKXMjwb…)
- "Why would a robot need to aim down sights? That's a human handicap due to how ou…" (ytc_UgxHptsLF…)
- "Oh i dig that usage of 'Vomiteer' in relation to this Ai slop, we gotta use that…" (ytr_Ugx46K0QQ…)
Comment (youtube · AI Governance · 2025-09-05T18:5…)

> i don’t understand why the AI would want to take over or destabilize human society. we can kinda see that people generally get better at making moral judgements and listening to (and controlling) their empathy. would it not track that a general super-intelligent agent would be ethical and moral in the extreme? better yet, if it develops a consistent and comprehensive moral code that differs from ours, wouldnt it be more likely that the bot is right, not us?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzlaNJXNgl9y1J1k3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy8m0V3FXM1DHMrwAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEv4wQfxwYvmPL79V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-yx3VnzORfojwzcp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbdVIHUkzpPHJMKkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiAXRF4th57ycft954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw7J_NiGVanegZI5s14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgzukDpM8gsDFEH2YIt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwVJuGQ12skD0SrjQl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxzK-venw54vX2TG9l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
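A response like the one above has to be parsed and sanity-checked before the codes reach the dashboard, since an LLM can emit values outside the codebook. Below is a minimal sketch of that step, assuming Python; the `ALLOWED` sets are inferred only from the values visible in this dump and are hypothetical — the real codebook may define more categories.

```python
import json

# Hypothetical codebook, inferred from the values visible in the raw
# response above; the actual coding scheme may include other labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding rows) and keep
    only rows whose value on every dimension is in the allowed set."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]
```

Rows that fail validation are simply dropped here; a production pipeline would more likely log them and queue the comment for re-coding.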