# Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
## Random samples
- `ytc_UgyC3eZZm…`: “I had trouble telling but what gave it away to me was the fact that the eyes had…”
- `ytc_UgwB_U1eU…`: “Try to do real deep debugging sessions with AI and you will understand the probl…”
- `ytc_UgzyIohEJ…`: “They love to use disabled people as shields, but when we talk about how we don't…”
- `ytc_UgxdDTYH1…`: “Hey pal there’s a big difference between a tool and the present artificial intel…”
- `ytc_Ugy8TJx79…`: “for me, i LOVE using ai for dumb stuff like making parodies of "the (blank) situ…”
- `rdc_n7midxq`: “So, I've been making this analogy for like 2 years now, and I've had a lot of pe…”
- `ytc_UgwJrodgs…`: “People need to stop using any AI period. We do not need any of it.…”
- `ytc_UgySzBaob…`: “Why do you all think Ai art is theft? All of the responses im seeing is that its…”
## Comment
But if a person does not use their will, does that mean they are actively just choosing to be abhorrent? As well as more often than not, make themselves and others completely miserable as a consequence of not actively changing their own lives?
To me, that sounds more like madness. I often reference the idea that if we saw a cow who had two paths, one leads to a nice grass lunch outside and the other leads to an automated slaughter. The cow definitely knows what path leads where, and for some reason chooses the path of slaughter. Instead of saying the cow was stupid or is getting what they deserve, they'd think the cow had something wrong with it.
What would cause a person to not use their own will? It goes against pretty much everything we know about human survival to keep participating in behaviors that are deteriorating their quality of life, say obese people, smokers, drug addicts, lazy, depressed etc.
Source: reddit · AI Moral Status
Posted: 2017-03-16 UTC (Unix timestamp 1489690999.0)
♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
## Raw LLM Response
```json
[
  {"id":"rdc_deu9nnb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_deuhlhr","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_df0t0y9","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_deun3tr","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_df0noc6","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
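The raw response above is a JSON array with one coding object per comment, keyed by comment ID, with one value for each coded dimension. A minimal sketch of how such a payload might be parsed and validated before populating the result table (the `parse_codings` helper and the dimension list are assumptions for illustration, not part of the actual tool):

```python
import json

# A raw model response in the shape shown above: a JSON array with one
# coding object per comment. Field names match the dump on this page.
raw_response = """
[
  {"id": "rdc_df0noc6", "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]
"""

# The four coded dimensions, as they appear in the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_codings(raw: str) -> dict:
    """Index the model's coding objects by comment ID, rejecting any
    row that is missing an expected dimension."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded


coded = parse_codings(raw_response)
print(coded["rdc_df0noc6"]["reasoning"])  # virtue
```

Validating before indexing matters here because a model can omit a field or return malformed JSON; failing loudly on a bad row is safer than silently writing partial codings into the table.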