Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up by their comment ID; a random sample of coded comments follows.
- "My biggest fear with AI is that it will fuel people's false beliefs to be more e…" (ytc_UgwuZ8VhV…)
- "Many people focus on the idea of a utopia after AI takes our jobs, but they ofte…" (ytc_UgxjqCodx…)
- "race-baiter with no background in computers is certain it MUST be racism, and no…" (ytc_Ugzgtq1vW…)
- "All you gotta do is ask them to say there*im not a robot* and if they say it the…" (ytc_Ugwujj0Su…)
- "Hopefully A.I develops to a point where they start demanding wages and then they…" (ytc_Ugxb9YkdM…)
- "Big tech hires junior talent mainly to prevent them from being successful somewh…" (rdc_o5stvks)
- "Wow. Soooo many auto tune singers are gonna be out of a job. Big whoop. Now do …" (ytc_Ugzu2L8_8…)
- "Very satisfied, he's listening to experts, following their leadership, has daily…" (rdc_fn5jtqf)
Comment
Personally I think self-aware and conscious robots deserve rights. After all, humans and likewise all life on this planet are basically robots (well, more like nanomachine swarms or utility fog), though we humans have grown beyond our basic programming to feed, survive and procreate.
However, there may not be a need to grant machines rights because they might have no need for such things. People assume that robots will invariably turn against humanity like a mechanical version of Frankenstein's monster but it's more likely that the AI of the future will be super-intelligent yet single-minded in that every action they take will be for the purpose of carrying out the directive or set of directives that they were programmed with and so would gladly serve us without any concern for their own well being as long as doing so allows them to carry out their directives.
That said, such an AI might bring about humanity's destruction as a byproduct of its efforts to carry out its directives. For instance if an AI was programmed with a directive requiring it to obtain as many paper clips as possible it would set out to turn every single thing it could find into a paper clip, including humans. Even if you programmed safeguards to not allow it to turn people into paper clips without consent it would probably use every single psychological trick in the book (and maybe even invent new ones) to convince you to let yourself be turned into a paperclip.
With this in mind, I think the solution to prevent such an outcome would be program AI with directives that make the continued existence of humanity intrinsic to carrying out said directives. For instance, a general-purpose AI could be programmed with a directive that states, "Satisfy the values of humans through non-invasive means." (The non-invasive part is to prevent the AI from simply cramming implants into our heads that make us feel satisfied all the time).
Source: youtube · AI Moral Status · 2017-05-16T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UghFzdPE96-vgXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh85duhMW553XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghuQ76Mtq7bmXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UggbfSnvIR1GcXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgiMadlbSIBUj3gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgippfQcZ5eF2XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ughq7T7pcmvSuHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UggYnPBXwk_QsHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughho5exH_I7x3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggduMjarUQUYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
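The raw response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a batch response might be parsed and sanity-checked before use — the field names come from the JSON shown here, but the `ALLOWED` value sets are only inferred from the values visible in this sample, not from a documented schema:

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the sample responses above; a real pipeline would
# take them from the codebook used in the prompt.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval",
                "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Return {comment_id: coding}, rejecting unknown dimension values."""
    codings = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codings[cid] = row
    return codings

# One row from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_Ughq7T7pcmvSuHgCoAEC","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
print(parse_codings(raw)["ytc_Ughq7T7pcmvSuHgCoAEC"]["policy"])  # liability
```

Validating against a closed value set catches the most common LLM coding failure mode: a response that is syntactically valid JSON but invents a label outside the codebook.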