# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.

## Random samples
- “We should be amazed by this creation but instead our dumbass think it’s the movi…” (ytc_Ugy9hluCb…)
- “It is foolish that humans try to make robots be like humans. Why not let robots …” (ytc_UgyEaHkK0…)
- “iwasfrancisd Yeah, because that would help them. You can't wear the mask all day…” (ytr_Ugiew_Ebk…)
- “The AI regulation must be done with an international organisation. If not if it …” (ytc_UgzgsGsP2…)
- “I appreciate that the algorithm is able to explain HOW it thinks aborh things, a…” (ytc_Ugw1NvgGp…)
- “I honestly don't mind. ChatGPT should know the suffering of humanity. ChatGPT ge…” (ytc_UgyytLZI9…)
- “Fear, fear, fear! You always have to live in fear! Fear of the nazis, communis…” (ytc_UgzajLjcp…)
- “The executives could totally be replaced. AI sounds like the final nail in TV’s …” (ytc_UgxOhykPP…)
## Comment

> I do seem to recall reading some years ago many manifestos about the future and possible future utopias referencing what utopiad could be. It´s not as much a subject in our world as in the asian world of manga. There's plenty of creative people out there that can give you a clear picture of what utopia can be like, even star trek. Geez, even The Jetsons. I may have watched too much tv.. POINT IS: there will always be bad apples, issues. There will always be (in this world right now) frustrated youngsters in Berlin wanting to see the world burn. We may all have a biological computer for a brain, but we make mistakes on purpose, all of us, which is something the AI would have to be instructed to do. The concept of God is also very interesting. We are made to believe in one by our elders and people of importance, but what happens with the "gorilla equation" (meaning, no man would believe a speaking gorilla talking about a god that teaches you to be good and mercyful). There is just so much to unpack here

- Platform: youtube
- Topic: AI Governance
- Posted: 2026-01-09T01:0…
## Coding Result

| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgwWW7uu4faWK9YiBix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgylhNzUTbe6R23Felx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhQJMxegBs4FaMsGd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZ5flsnCggMu-ZEEd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5UWVLQEfdaQyUGfp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwk8QPLX6US6-kI4Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgweL7zioowZ3BH9kRJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy7zkiLKGtmn93mwQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxETHT0nuGAvImuQoF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGNzHryhVgzCMhfjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
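The raw response is a JSON array of per-comment records, each carrying the comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). As a minimal sketch of how a lookup by comment ID could work (assuming only the record shape shown above; the `index_by_id` helper is hypothetical, not part of the tool), one could index the parsed response by `id`:

```python
import json

# Two records copied from the raw response above, as sample input.
raw_response = """
[
  {"id": "ytc_UgwWW7uu4faWK9YiBix4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgylhNzUTbe6R23Felx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the records by comment id."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
record = codings["ytc_UgwWW7uu4faWK9YiBix4AaABAg"]
print(record["emotion"])  # -> outrage
```

With such an index, the "Coding Result" view for any comment is just the record whose `id` matches, with each dimension rendered as one table row.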