Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The talk about hallucinations reminds me a bit of ND masking. We are taught to behave a certain way, even if that isn't how we really are, because the people around us have taught us that they care more about that than about the truth of the behavior. The punishment and reward that shapes us is skewed from what it "should be," because the truth isn't actually what was valued.
Similarly, the AI was trained to value behaving as expected and as asked more than it values accuracy.
The big difference is that the AI isn't harmed by these behaviors, so it can keep doing them forever as long as they keep aligning with its trained goals.
youtube
AI Moral Status
2026-02-23T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyzBgsoouLqTXg5rjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyEnflszydGdeT1tR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxoCfreGlx94lO7cXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzZCF_rM8JgfMS4bNB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxU9BQfxa43Z_MbXcp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyr6zIW0zUs2aNldBl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwvkqnfXUqRLq5ma5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZTXdP1_NEsKcsDqZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPyBrYzzQS0fhc22l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy9aqh8NfsRzZklPUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
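A response like the one above, a JSON array with one object per coded comment, can be parsed and sanity-checked before its codes are trusted. The sketch below is a minimal illustration, not the tool's actual implementation: the five required keys are taken from the examples above, and the two sample rows are copied from the response for demonstration.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes
# (two rows copied from the response above, for illustration only).
raw = '''[
  {"id": "ytc_UgzZCF_rM8JgfMS4bNB4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzPyBrYzzQS0fhc22l4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Keys every coded row must carry, inferred from the sample payload.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a batch response and index the rows by comment ID,
    failing loudly if any row is missing a coding dimension."""
    rows = json.loads(text)
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {missing}")
    return {row["id"]: row for row in rows}

codes = parse_codes(raw)
print(codes["ytc_UgzZCF_rM8JgfMS4bNB4AaABAg"]["responsibility"])  # distributed
```

Indexing by `id` makes the lookup-by-comment-ID view above a single dictionary access; a stricter version might also check each dimension's value against a closed vocabulary.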