Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
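A minimal sketch of such a lookup, assuming the coded records are exported as a JSON array shaped like the Raw LLM Response shown further down this page (the file name `coded_comments.json` and the function itself are hypothetical, not part of the tool):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record matching comment_id, or None.

    IDs on this page are displayed truncated (e.g. "ytc_UgwXUDhSw…"),
    so a trailing ellipsis is treated as a prefix query.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    if comment_id.endswith("…"):
        prefix = comment_id[:-1]
        return next((r for r in records if r["id"].startswith(prefix)), None)
    return next((r for r in records if r["id"] == comment_id), None)
```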
Random samples
“AI can be programmed do not respond to dangerous things”. Right, who defines d…
ytc_UgwXUDhSw…
I hope that AI gets intelligent enough to recognize how we treat others and our …
ytc_UgjmvxyKp…
Hopefully we might remain competitive at thinking outside of the box. Creating s…
ytc_UgwNkjITL…
@zeek9697
Who said you have to be human in order to believe?
If belief is log…
ytr_Ugxq5yoSH…
I have a naive but genuine question: doesn't AI need energy to function? Wouldn…
ytc_Ugyv8_Mdy…
We're glad you found the interaction between Sophia and the AI intriguing! If yo…
ytr_Ugz_7aOjR…
No AI can ever replace the love of a mother or true love in general.…
ytc_UgzQobB1I…
I remember when I tried using a free AI art generator site for the first time.
…
ytc_Ugy7AuQ50…
Comment
The issue is that you're making a leap from a probabilistic understanding of reality to pretending that probabilistic understanding is automatically an objective understanding of reality.
In theory the two are very similar (other minds probably experience the world the same way I do, they probably have vaguely similar preferences, the observed world is probably real, and so on), but there's a huge difference between treating those as a set of useful assumptions and knowing any of them for certain.
Even at the most trivial level, there are plenty of subjective experiences that people have radically different reactions to: pain and privation, sexual experiences, what brings life satisfaction, and so on. Not only are those experiences different, but the way people interpret them is radically different depending on their worldview and understanding.
> Now does any of that prove objective morality? I daresay it doesn't. But by the same token, nothing can prove objective reality either. I'd say that the 10 points above prove an objective morality, or at least a very workable and practical and pragmatic morality, about as well as it is possible to be proven.
Here's the crux of the problem - it points to the idea that some vague, general principles can be commonly held. You can make a materialist argument for why people should probably follow the golden rule, for example. But when you actually drill down to specific moral issues, you're no further ahead than when you started.
There are still plenty of moral arguments you can make that start from the exact same assumptions and come to radically different conclusions. The consequence of the assumptions you're making here is that you are left with a radically subjective morality, one that is virtually powerless to make any prescriptive judgements on anyone's behaviour beyond the most pointlessly destructive types.
reddit · AI Moral Status · posted 2014-11-03 (Unix timestamp 1415034364) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_n8j76rx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"rdc_n8jdfel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_clrt2bh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
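For anyone post-processing these dumps, here is a minimal sketch of how a raw response like the one above could be parsed and sanity-checked before rendering a Coding Result table. The allowed values per dimension are inferred only from the examples on this page; the actual codebook may permit more:

```python
import json

# Allowed values per dimension, inferred from the examples on this page
# (assumption: the real codebook may define additional values).
DIMENSIONS = {
    "responsibility": {"none", "mixed"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) and flag unexpected values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                print(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```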