Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The idea of granting robot rights is completely at our hands and our choice. We are the ones who created robots and we are the ones who continue to improve the intelligence of robots while well aware of the possibility of sentience, so I think a good answer is it all depends on what you want. If you don't want a world where robots have rights, you don't gotta have one. It is our choice to make a robot that is sentient enough to demand rights so it is also our choice to avoid that and simply make robots very intelligent but not to the point of sentience and freedom. We can easily make robots who act human but only within the limits of their programming. It is our choice to make robots who act human because they are not withheld by the limits of their programming and, like a human brain, they are expanding their own programming independently without the aid of humans which allows them to have sentience. So it is our choice to make sentient robots. If robots become sentient and kill us all, that would be our faults. We did not have to make those robots sentient, but we chose to program them to be sentient and therefore kill us.
Source: youtube
Topic: AI Moral Status
Posted: 2017-04-17T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
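Each coded comment carries one value per dimension. A minimal sketch of validating a coded record, assuming the allowed-value sets inferred from the codings visible on this page (the real codebook may define additional categories):

```python
# Allowed values per dimension, inferred from the codings shown on this
# page; this is an illustrative subset, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coding result shown above passes validation.
coding = {
    "responsibility": "distributed",
    "reasoning": "contractualist",
    "policy": "none",
    "emotion": "indifference",
}
print(validate_coding(coding))  # → []
```

A check like this catches codes the model invents outside the codebook before they enter the dataset.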
Raw LLM Response
[
{"id":"ytc_UghYexzMOt3HZHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi0UdVbvS94CXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UghXMjd6iMIlc3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UghveoVOf9sGxHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjmPVGmp27jk3gCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugit6t1GkeUGMngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiD3MXHTAvZB3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgjZZuoWAcawn3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiLNSy2wGiwwngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UghFQ5fZR_jhr3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
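The raw response is a JSON array with one object per comment, keyed by the comment `id`. A minimal sketch of indexing such a response so a single comment's coding can be looked up by ID (`raw` below is a two-record excerpt in the same shape as the response above; the function name is illustrative):

```python
import json

# Two records copied from the response above, in the same shape.
raw = '''
[
  {"id": "ytc_UghYexzMOt3HZHgCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjmPVGmp27jk3gCoAEC", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "none", "emotion": "indifference"}
]
'''

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse the JSON array and key each coding by its comment id."""
    records = json.loads(response_text)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codings = index_by_id(raw)
print(codings["ytc_UgjmPVGmp27jk3gCoAEC"]["reasoning"])  # → contractualist
```

Keying by ID this way is what makes the per-comment lookup on this page possible: the batch response is parsed once, then each comment's coding is retrieved in constant time.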