Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwvSEGW_…: it depends a lot on how the AI is trained, in stable diffusion the AI decomp…
- ytr_UgzQdfwdR…: You're absolutely right! "Σοφία" beautifully captures the essence of wisdom in G…
- ytc_Ugyz1qfjF…: None of the truck drivers shown in this video look anything like the truck drive…
- ytc_UgzUt7r9B…: Amazing ignorance. Just amazing. All AI is doing is increasing the velocity that…
- ytc_Ugw5C1PKZ…: Ok i already see a problem back then and now today. The one thing restricting ai…
- ytc_UgymNbInx…: The most annoying that about this is that AI isn't even good. Google is adding a…
- ytc_UgxiyJcdV…: If AI will do want it needs to accomplish its goal, the answer is simple. Just c…
- ytc_Ugw48YI08…: This thesis argues that such assumptions are anthropocentric: they project human…
Comment
Thing is, by the fifth law of logic (if i recall right, the principle of sufficient reason), it's impossible for a being to create another being that's at a similar or higher level of intelligence (in an artificial way). It may be possible through genetic manipulation, but then the result wouldn't be a robotic being neither would have been an artificial creature, as it would've been "born" somehow and would be alive.
So even when Matrix, I Robot and Westworld present interesting concepts, they're ultimately futile. It's just science fiction, so I think we should just abandon the topic and focus on human rights, because it seems a lot of people lack their own. First fix our current world, then try to fix impossible imaginary future ones.
youtube · AI Moral Status · 2017-02-23T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
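A table like the one above can be generated directly from one coded record as it appears in the raw LLM response. A minimal sketch (the function name is hypothetical; the "Coded at" timestamp is added by the pipeline, not emitted by the model):

```python
# Render one coded record (a dict as emitted in the raw LLM response)
# into a markdown "Coding Result" table. `render_coding_result` is an
# illustrative helper, not part of any documented API.
def render_coding_result(rec: dict, coded_at: str) -> str:
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),  # pipeline timestamp, not model output
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

example = {"responsibility": "none", "reasoning": "mixed",
           "policy": "none", "emotion": "indifference"}
table = render_coding_result(example, "2026-04-27T06:26:44.938723")
```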
Raw LLM Response
[
{"id":"ytc_Ugi3l4d6_ZVSPngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UggXxUS6ImDcVngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UggiiBkWN73X1ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugg5MrhnXcA4ZHgCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggRPiq5dwY9P3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg2foK25E_ACHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjaRkRwKWzpoHgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg8QyIAn2PW43gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj4vFwy4jRFsngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjrvLoOOdSbhngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
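The raw response is a JSON array of per-comment codes. A minimal sketch of parsing and sanity-checking one such batch, assuming the allowed value sets inferred from the codes visible on this page (the real coding scheme may include categories not shown here):

```python
import json

# Allowed values per dimension, inferred from this page's examples;
# this is an assumption, not the authoritative coding scheme.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"mixed", "unclear", "deontological", "contractualist", "consequentialist"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records without a comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}]'
batch = parse_batch(raw)
```

Dropping malformed records rather than raising keeps a single bad line in a batch from discarding the other nine codes.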