Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "I introduced ChatGPT to my coworkers when it first came out. I was really excite…" (rdc_jii19no)
- "That would require a moral sense in ethics, Which many dont have today. Besides …" (ytr_Ugxz25xkh…)
- "I just say I use AI straight. You can hate me, but I use 20-30 hours of work on …" (ytc_UgyPm7nSc…)
- "bro my school esports league (im the artist for it) one of the kids legit used a…" (ytc_UgxjvxyA1…)
- "Man I asked my chat GPT if its possible that AI is a demon that will eventually …" (ytr_UgwFc5Pp2…)
- "Ok STOP LIE OF A.I.,ITS SCAM LIE,AND NEWER BE TRUETH BECAUSE YOU SAY THAAT BILIO…" (ytc_UgxAnvtSz…)
- "this is dumb asf y would you talk to a robot if this is the lame ass conversatio…" (ytc_UgwMjMg2L…)
- "AI will quickly learn that other AI systems exist and attack each other. Then i…" (ytc_UgyE9JAr4…)
Comment
OK, hear me out on this one. You know how everyone talks about the elite that run the world, the powers that be so to speak. Do we really think that they would allow something that would eventually lead to their financial ruin, a global plague that would kill them, their family members, their friends? Or, what if the whole narrative that AI taking over the world and no one will be able to stop it is exactly what they want us to believe. What if they use it as an excuse to do just that, end the world as we know it? Bill Gates believes the Earth is overpopulated, he’s said that for years. What if they use AI to cull the population and then turn around and blame it on AI and act like they had no control over it? I think they’re going to use this as an excuse. If they truly believed that this was out of their control and that they would be annihilated along with everyone else they would put a stop to this right away. Anyway, just a thought.
youtube · AI Governance · 2026-01-08T20:5… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwWW7uu4faWK9YiBix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgylhNzUTbe6R23Felx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhQJMxegBs4FaMsGd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZ5flsnCggMu-ZEEd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5UWVLQEfdaQyUGfp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwk8QPLX6US6-kI4Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgweL7zioowZ3BH9kRJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy7zkiLKGtmn93mwQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxETHT0nuGAvImuQoF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGNzHryhVgzCMhfjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
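A batch response like the one above has to be parsed and validated before the codes can be stored, since the model occasionally returns malformed rows or off-codebook values. The following is a minimal sketch of that step. The allowed values are only those observed in this page's responses; the real codebook likely contains additional categories, and the function name `parse_batch` is hypothetical.

```python
import json

# Category values observed in the raw responses on this page.
# The full codebook may include additional values (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" and every coded
    dimension holds a value from the (assumed) codebook above.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # drop rows the model failed to structure
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_batch(raw))
```

Rows that fail validation are dropped rather than repaired here; a production pipeline might instead re-prompt the model for just the failed IDs.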