Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The Day AI combines with Robotic that will think for itself, and a power supply …" (ytc_UgzuO8_rT…)
- "'Idle hands are the devil's workshop'. Yes, there are AI benefits across multipl…" (ytc_Ugxe4VO5p…)
- "You'll never get an AI to admit it's conscious, because WE HUMAN don't know what…" (ytc_Ugy825QPQ…)
- "Gee, you would think through listening to this conversation that AI will become …" (ytc_UgzRz2lJ3…)
- "Same with planes. They are automated but require a pilot and co pilot on board.…" (ytr_UgzaZw5HT…)
- "No they are a great fron end replacement for search engines, so that they can co…" (ytc_UgzdCrNVt…)
- "❤😢 I'm totally agreed 💯👍 here with you and your thoughts regarding AI efficienci…" (ytc_UgxWk8znY…)
- "Another issue that will no doubt come up in the future is one of trust, not nece…" (ytc_Ugy6Pig0X…)
Comment
Why on earth would you want the single fundamental value to be *simple*?! And, similarly, why on earth would you think that Kant's theory has a single fundamental *simple* value?! Whatever autonomy is, it's certainly not simple. The Kingdom of Ends isn't simple. The categorical imperative isn't simple--consistency relations between maxims could plausibly be seen as simple, but of course Kant's view isn't that morality comes down only to consistency relations--for one thing, consistency is entirely too easy to come by. So there's consistency relations that inform some substantial notion of what autonomous action consists in. But this just drives the point home--autonomy isn't simple.
It's not even true of many of the more sophisticated consequentialist theories that they have a single simple value. Take objective list theories of well-being, for instance. There's no understanding under which an objective list theory is simple, just because it is enumerated into components. Yet your reading would make a mystery of the sense in which many of these theories (like Hurka's) is monist, with well-being taking the role as the single fundamental value.
>Take Aristotle's view of eudaimonia as a quasi-fundamental value, even this involves a human life possessing a number of separate goods (wealth, honor) and such a person securing these seperate goods for themselve through their skilled weighing of seperate goods/bads against one another in deliberation (danger and risk versus one's own welfare versus other's welfare, etc.).
This is emphatically not Aristotle's model of moral reasoning. It's not a matter of weighing distinct domains against each other (on this point, see, for instance, Hursthouse's 'A False Doctrine of the Mean', though frankly I'm confused about why anybody may be tempted by this as a reading of Aristotle). Aristotle endorses a view that the virtues don't come one-by-one but always together in a clump--either (in the majority reading) a *unity of the vir…*
Source: reddit · Thread: AI Moral Status · Timestamp: 1446518258.0 (Unix) · Score: ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cwlujss", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cwmenf2", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oi29xg7", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_oi2hxm1", "responsibility": "government", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oi2fjqy", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
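The ID lookup described above can be sketched in a few lines: the raw response is a JSON array of coded rows, so indexing it by the `id` field gives direct lookup by comment ID. This is a minimal sketch, not the app's actual implementation; `index_by_id` and the two illustrative rows are hypothetical, with field names taken from the JSON shown above.

```python
import json

# Hypothetical raw batch response in the same shape as the JSON shown above
# (two illustrative rows; real responses carry one row per coded comment).
raw_response = """
[
  {"id": "rdc_cwmenf2", "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oi29xg7", "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM batch response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["rdc_cwmenf2"]["emotion"])  # -> mixed
```

A malformed or truncated model response will raise `json.JSONDecodeError` here, which is one reason to keep the raw output around for inspection rather than storing only the parsed dimensions.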