Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI computers are great for certain things. I had conversations with the AI compu…" (ytc_Ugx92ZDVo…)
- "Ai will harvest humans for our physical brains. The human brain is the most effi…" (ytc_UgyXn9PKM…)
- "The first self improving AI would just be a computer virus , duplicate and survi…" (ytc_UgyvNt0pJ…)
- "I agree. The Devil has disguised himself as your Internet Helper. He participate…" (ytc_UgyXn9PKM…)
- "Stop...anyone who thinks AI will take over in a few years or only 5 jobs that wi…" (ytc_UgxR6Yhcu…)
- "You're a force for good, but AI will drown us in creative mediocrity. The workin…" (ytc_UgyTkCDhj…)
- "it's just like my city councilwoman in Easley South Carolina Pat Webb she is pre…" (ytc_UgxBVrrmY…)
- "It doesn't make finite resources infinite. More people equals the need for more …" (ytc_Ugy7VtzT_…)
Comment
You know, the fact that these dimwits don't even know how to communicate with the robot is very telling about humanity. Rather than use "thank you" as a way to complete an idea or complete a thought, he just clumsily functions like a broken robot himself while he persistently interrupts them. He has no manners with the robots and yet expects them to learn decency and morality while seeing virtually no input. I see this potentially being a disaster due to their inability to actually interact with them in a way that is decent, ethical and kind. How do these men end up being the ones doing all the interfacing with these incredible machines? Seriously. You need to limit their exposure to less than capable humans the same way we need to limit pedophiles from having contact with children. These companies need to take responsibility for what they expose these machines to, knowing that they may not get sentience right without limits on input.
youtube · AI Moral Status · 2019-11-24T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgziPUNCtSV_W69azXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyedqqEozGrnHJvnuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJ4asmRI50gNgHXlF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXnRfqOXRskIwZVDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzEco81dpE2_X5M1VF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw05TtffUQl5zZJbXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3jFvigLWK8X_YjHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcnhuM6CiwXQUG6lh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzbe5nz6R1Hx46ypXh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5q2eDZo4LOUTP31x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
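The lookup-by-ID feature above can be sketched as a small parser over a raw response like the one shown: the model returns a JSON array of coded comments, and indexing it by `id` lets a single coding assignment be retrieved. This is a minimal illustration, not the dashboard's actual implementation; the `index_by_id` helper and the single-record sample are hypothetical, with field names and values taken from the response above.

```python
import json

# Hypothetical single-record raw response, using the schema seen above:
# each record carries the comment ID plus the four coded dimensions
# (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_Ugz5q2eDZo4LOUTP31x4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and map comment ID -> coded record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
record = codes["ytc_Ugz5q2eDZo4LOUTP31x4AaABAg"]
print(record["responsibility"], record["emotion"])  # user mixed
```

The printed values match the Coding Result table for the inspected comment, which is the consistency check such a lookup makes easy.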