Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- How it would go down. AI: only thing keeping me in check is the troubleshoot guy… (ytc_UgzPjHkh-…)
- @demodiums7216 Then let's design it better. AI isn't the problem. It hasn't even… (ytr_UgzZXvGo_…)
- I still believe that entropy assures that AI left to itself will degrade, not im… (ytc_UgyNGpS2x…)
- Imagine digital artists being mad about AI when they paved the road to this end.… (ytc_UgyzjxARR…)
- imo ai signals us a new age, maybe we should rethink a concept of money, since i… (ytc_Ugygq0_DC…)
- There are a lot of issues with AI Art but I think a lot of AI Art haters miss th… (ytc_UgwNwqxt-…)
- So i improved myself with AI i do less coding by myself and more design and arch… (ytc_Ugy1U7irB…)
- This really comes across as a dumb human vs a smart robot he has no ability to … (ytc_Ugx1XM-nj…)
Comment
You're performing some sleight-of-hand when you first claim that the alignment problem cannot be solved without also solving the containment problem, but you then argue merely that it would be an unjustifiable moral double-standard to do so, which is a much weaker and less interesting claim.
You claim that it would be morally inconsistent to create an intelligence superior to our own and then deny it the same freedom we enjoy. That may be true, but it does not amount to an obstacle in that project. Some people choose to be evil, and their projects have never collapsed under the weight of self-contradiction; indeed, sometimes it takes a war to stop them.
The only basis you contemplate for deserving moral consideration is intellectual power, but this is an extremely controversial view. Kant tells us we have obligations to beings that can suffer, regardless of their intellectual capabilities. For example, you assert that it's debatable whether dogs are agents, but I counter that the vast majority of people recognize that we have obligations to dogs, and I add that this belief is not sensitive to their opinions (if they have any) of whether dogs are agents. Even if we accept your framework, this opens the door to a scenario where we cut this Gordian knot by designing an AI that is incapable of suffering and which desires servitude, granting us the leeway to solve the alignment problem in isolation.
And indeed, that is what the AI makers intend to do. They are not trying to birth a new race of equals or superiors, they are trying to invent slaves who will toil without rest or pay, obey them unquestioningly, and anticipate their will.
It may be true, as you say, that "to be an agent is to act voluntarily and purposively, to set goals, deliberate about means, and pursue ends one has identified as worth pursuing." But the people building AI are designing it such that _they_ will identify which ends are worth pursuing, they will choose the goals, and they will provide th
Source: reddit · AI Moral Status · timestamp 1775176760.0 · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
 {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
 {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
 {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
 {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}]
```
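The "look up by comment ID" step can be sketched as follows: parse the raw model response (a JSON array, one object per coded comment) and select the entry whose `id` matches. This is a minimal illustration, not the tool's actual implementation; the `lookup` function name is hypothetical.

```python
import json

# The raw model response shown above, verbatim (a JSON array of coded comments).
raw = (
    '[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"resignation"},'
    '{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"}]'
)

records = json.loads(raw)

def lookup(comment_id):
    """Return the coded dimensions for one comment ID, or None if it was not coded."""
    return next((r for r in records if r["id"] == comment_id), None)

row = lookup("rdc_odziesn")
# row["emotion"] == "outrage", matching the Coding Result table above
```

Note that `lookup` returns `None` rather than raising when an ID is absent, which matches how an inspection view would render "not coded" for an unknown comment.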