Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
we have the agree that ai artist are the ones who haven't touched the paper…
ytc_UgwrdLWA2…
I joined an art server on discord and one of the server members were banned for …
ytc_Ugw3HzCD3…
Altman is a finance bro. He knows nothing about AI beyond what he needs to say t…
ytc_UgxDYVpTs…
This is bullshit, facial recognition tech is 99 percent accurate and it does NOT…
ytc_UgwW_Luhp…
I agree to regulate AI we will need an international effort. For example, if U.S…
ytc_UgxWkR3uV…
You can make your own Afrobeat in a DAW and upload it to Suno - up to 2 minutes …
ytc_UgzyU84xL…
If AI made better diagnoses than human doctors, which would you go to? That's co…
ytc_Ugys9BCJB…
I think the way this person is representing this is a bit off the mark. For exa…
ytc_Ugxkyupy5…
Comment
>More generally - all concepts intended to represent reality are arbitrary, vague, and social constructs.
Maybe you should have used this (the more general claim) as your title, rather than the moral aspect (the more derivative claim). Because of this, any discussion is bound to be hopelessly confused, as your metaphysical claims are mixed together with your moral claims.
I'll start with your conclusion on morality, because I believe it contains a rather glaring omission. Then I'll move back to the arbitrariness of concepts.
>The algorithms will be like a set of tools with the understanding that each can have strengths and weaknesses and achieve different ends, but just like tools in a workshop, there is no one ‘correct’ tool - **it just depends on what goal you choose.** And once moral classifications are chosen, we can answer scientifically whether something adheres to it without being sidetracked by the question of whether a particular classification of morality is correct. With the understanding and acceptance of my thesis, we can transition moral philosophy into a science of morality.
(Emphasis mine.) This misses the point entirely. The whole point of morality is to decide on which goals you choose. Your algorithm is just a regular planning/control algorithm, no different from any planning algorithm we can make using known techniques. Overall, this paragraph says nothing else than "once we've all agreed about what is moral, we won't get sidetracked about what is moral". While this is true, this is also not particularly helpful. How do you propose we reach an agreement? On what basis? If it's arbitrary, *and yet people disagree*, we're just stuck.
At the end of the day, assuming we accept your thesis, we are still at step 0 of moral philosophy: how do we agree on the goals?
Back to concepts now.
>More generally - all concepts intended to represent reality are arbitrary, vague, and social constructs.
I don't think this statement really makes much…
Source: reddit | Topic: AI Responsibility | Timestamp: 1495835792.0 (Unix epoch) | ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_di32txv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di38b7x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di2wpnk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di3r4u7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_fvw1s50","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
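A minimal sketch of how a raw response like the one above could be parsed and indexed for the "look up by comment ID" workflow. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here; the function name and variable names are assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw response above; in practice the full
# JSON array returned by the model would be used.
raw_response = """
[
 {"id":"rdc_di32txv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_fvw1s50","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_fvw1s50"]["policy"])  # → regulate
```

Indexing by ID up front makes each subsequent lookup O(1), which matters when cross-referencing many coded comments against their raw model output.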