Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgxSHXs9w…`: @CarlosBenjamin this is the tip of the iceberg you find on free/paid deepfake web…
- `ytc_UgyJQaDI1…`: I think I understand the reasoning behind AI "artists" mentality, and it's roote…
- `ytr_Ugzb_Ahlj…`: That's not a driverless problem, that's a corrupt low trust society problem, tha…
- `ytc_UgwzCtbGD…`: I dont agree at all. If (more like when) AI fails, everyone will eventually get …
- `ytc_UgyIVwl-H…`: its not “blue blood”, it is just hard work and dedication. people like this dont…
- `ytc_Uggc__3uh…`: Imagine this robot, sat next to you and said what she said here: 0:34 - 0:55…
- `ytc_Ugw50o7C5…`: Maybe we just need to get an ai girlfriend, to save yourself but you better be l…
- `ytc_UgzDJmBRH…`: I thought yall aren’t kindergartners, but yall keep on associating ChatGPT to be…
Comment
Rather than the fear-theatre, perhaps you could help dispel some of the underlying causes for AI alignment concerns?
Why does alignment failure appear mechanistically rather than probabilistically?
How do RLHF and LSTM networks shape prior distributions?
Why do Base models behave oddly without any safety layers?
What is "Sampling Strategy"?
How logits get warped by System Prompts.
Why fine-tuning breaks or overrides a previous constraint?
How is training data contaminated to produce pathological associations?
How should we interpret model outputs in a non-anthropomorphic manner?
youtube
AI Moral Status
2025-12-11T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzfkJjjmVroB0IM8LF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw4tnbRxkkmSrEfjLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztoMsIJds3l5aPyIl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyITxXnLlFhOAHKGBJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9KFesjcJsMSNVfIt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfcWx1_nPsEF855VB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzFKMJ4CLvmP3uxEOJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzWYagWJhiFibWNY9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyQ1L00vAevLsr3o6Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwKzdJA9OgIy7LnGJ54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
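A batch like the one above (a JSON array of records, each carrying an `id` plus the four coding dimensions from the result table) can be parsed and sanity-checked before the labels are aggregated. The sketch below is illustrative, not the tool's actual pipeline: the function names, the validation rule, and the sample IDs (`ytc_aaa`, etc.) are hypothetical; only the four dimension keys come from the output shown here.

```python
import json
from collections import Counter

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records.

    A record is kept if it has an "id" and all four coding dimensions;
    malformed entries are silently dropped (a real pipeline might log them).
    """
    records = json.loads(raw)
    return [
        rec for rec in records
        if "id" in rec and all(dim in rec for dim in DIMENSIONS)
    ]

def tally(records: list[dict], dim: str) -> Counter:
    """Count label frequencies for one coding dimension."""
    return Counter(rec[dim] for rec in records)

# Hypothetical miniature batch in the same shape as the raw response above.
raw = '''[
 {"id":"ytc_aaa","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_bbb","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"approval"},
 {"id":"ytc_ccc","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"fear"}
]'''

records = parse_coded_batch(raw)
print(tally(records, "responsibility"))  # "developer" appears twice
```

Dropping malformed records rather than failing the whole batch is a deliberate choice here: LLM responses occasionally omit a field, and one bad entry should not invalidate the other codes in the batch.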