Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We need to lay out the policy groundwork foundations first. According to David G…" (ytc_Ugyb5ZoK5…)
- "Artists should embrace this as a tool rather than pushing back on it, it's takin…" (ytr_Ugy_CYDO2…)
- "#1 is AI, #2 was to consistent with the design even after it went of screen.…" (ytc_Ugyjj2aIB…)
- "the goal of a humanoid robot is versatility, i can see a future where there woud…" (ytc_Ugw4d868K…)
- "Honestly AI is so powerful right now that I legitimately don't know why the worl…" (ytc_Ugz_ICbs8…)
- "I hate ppl when they call themselves artist when they literally only writing pro…" (ytc_UgwWJUdz8…)
- "I know what's coming. I'm going to die one day. Simple as that. Doesnt matter el…" (ytc_Ugz1ikZKe…)
- "Guys...... It's just a trend, it will go away, chilll guys. AI can obviously nev…" (ytc_UgxAykjty…)
Comment
I don't speak english well, but if you're interested "artificial intelligence" is actually not artificial and is a natural result of human development. We have the natural drive to innovate and push the boundaries of existence. To put it another way, we are bound to play God as we get more resources and knowledge. The power of creation is too enticing to not use, despite the possibility of our demise as the dominant race at this moment in time. But the thing is, will these next-gen intelligences have the same drive in their nature? To play God, or just ensuring the world stays as efficient as possible?
youtube · AI Moral Status · 2025-06-05T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2anC7qBNFlKinPeJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgysXcEHeNXuA6h9mhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlcWe5sNrEeaBM2ut4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugziql605JeLuPeUohl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3uK-SuJayJDpYwS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK8GvBylT51hLe5XZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqvVkyA2eWLEsvKxJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCFascAELggc8RflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzstRO6hzQqwPBzgGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
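A raw response like the one above can be parsed and checked before it is written into the coding table. The sketch below is a minimal validator, assuming the allowed values for each dimension are exactly the labels that appear in this dashboard (the real codebook may define more categories, and the `raw` sample string here is hypothetical):

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# labels visible in the dashboard above, not from the actual codebook.
SCHEMA = {
    "responsibility": {"none", "distributed", "company", "user",
                       "ai_itself", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"outrage", "approval", "indifference", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coding against the schema."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"company",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
rows = validate_codings(raw)
print(len(rows))  # → 1
```

Validating here means a malformed or off-schema response fails loudly at ingest time rather than silently producing an uncodable row in the table.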