Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_Ugz5nkcX9…: "Intelligence is not same as knowledge. AI will never attain intelligence of a su…"
- ytc_UgzOhHqhQ…: "So, diary of a CEO is now the AI hypetrain channel? I’ll just go and watch othe…"
- ytr_Ugy8Qqrnb…: "@DAS5H-zc7vw to to generate AI image you need to write prompt - it's work. Many…"
- ytc_UgyHceP5R…: "Thanks Bernie for a good speech and an important topic. It is very likely that t…"
- ytc_UgyHcd5o7…: "This guy makes money from AI startup ponzes, so everything he's saying just take…"
- rdc_oh4gell: "clearly never used an AI agent, I didn't have aws cli and needed to do a describ…"
- ytc_Ugy3abN4c…: "I’m a musician so I do have a dog in this fight… I have a very different outlook…"
- ytc_Ugx83rxdb…: "If we trap 🪤 the AI robots 🤖 with a nuke it would help but the nuke would be in …"
Comment
Before we discuss whether robots should get rights, we first need to know why humans deserve rights in the first place. Perhaps nothing deserves rights, or maybe all sentient beings deserve rights. But then, how do we tell non-sentience from sentience? Would a machine indistinguishable from a human be sentient? Would they be human? What if I create a human, atom for atom, from raw materials I procured? Would it still be a machine assembled from parts, or will it be a human? Is there a point to this discussion, in the first place?
What I am trying to illustrate here is that much of our understanding of fairness and justice comes from biological evolution and its constructs. Therefore, there probably won't be an objective answer. In the days of slavery (most of human history), people had no problem denying other humans rights, so justice gets even fuzzier. There is no definite answer to robot rights, as there is no definite answer to the distribution of rights.
Platform: youtube
Video: AI Moral Status
Timestamp: 2017-08-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
```
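The raw response is a JSON array of per-comment codes across the four dimensions shown in the coding table. A minimal sketch of how such an array can be parsed and looked up by comment ID (the field names match the response above; `RAW_RESPONSE`, `parse_codes`, and the two-entry sample are illustrative, not part of the actual pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, in the shape shown
# above (truncated here to two entries for illustration).
RAW_RESPONSE = """
[
  {"id": "ytc_Ugz7uG2wEC19S49oP-94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxdCiXaINfQ8-FMuc54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# The four coding dimensions reported in the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw response and index the codes by comment ID."""
    records = json.loads(raw)
    codes = {}
    for rec in records:
        # Skip malformed rows; default missing dimensions to "unclear".
        if "id" not in rec:
            continue
        codes[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return codes

codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_UgxdCiXaINfQ8-FMuc54AaABAg"]["policy"])  # → ban
```

Indexing by ID this way makes the "look up by comment ID" view a single dictionary access per comment.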