Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@genericname2747 the comment wasn't about nightshade being illegal -- it was abo…" (ytr_Ugyi5EtWc…)
- "Come on, please! I have a wife for all that stuff. I want the robot for all the …" (ytc_Ugz7TgP7U…)
- "You can never trust billionaires. No billionaire has a moral compass you cannot …" (ytc_UgzahGzp9…)
- "Technology that allows for the an entire system of self driving cars should also…" (ytc_UgjGy_ree…)
- "A lot of things wont be automated as they say. They've been saying this same cra…" (ytc_UgzRgg56h…)
- "It's not AI that needs to be rewritten. It's the human race and it's behaviors a…" (ytc_Ugw986SdI…)
- "AI in general must be carefully regulated. As for super intelligence it should n…" (ytc_Ugx6lrE_A…)
- "@AnimatedSquirrelThe manifestation of the observation is a series of choices. Th…" (ytr_Ugz3nyA11…)
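The sample IDs above share a small set of prefixes. A minimal sketch of a lookup helper that routes an ID to its source, assuming (this mapping is inferred from the samples, not confirmed by the tool) that `ytc_` marks a YouTube comment, `ytr_` a YouTube reply, and `rdc_` a reddit comment:

```python
# Hypothetical helper: the prefix-to-source mapping below is inferred
# from the sample IDs on this page and is an assumption.
PREFIX_SOURCES = {
    "ytc_": ("youtube", "comment"),
    "ytr_": ("youtube", "reply"),
    "rdc_": ("reddit", "comment"),
}

def classify_id(comment_id: str) -> tuple[str, str]:
    """Return (platform, kind) for a coded-comment ID, by prefix."""
    for prefix, source in PREFIX_SOURCES.items():
        if comment_id.startswith(prefix):
            return source
    raise ValueError(f"unknown comment ID prefix: {comment_id!r}")
```

For example, `classify_id("rdc_o9w4ktd")` would return `("reddit", "comment")` under this assumed mapping.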
Comment
> Safety is in quotes because models like Claude 3.7 were not nearly capable enough to pose any semblance of biological threats.
We know that in hindsight, of course, but they delayed a week and a half to be sure. I don’t see the problem with erring towards safety.
reddit · AI Moral Status · posted 1773251401 (unix epoch) · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o9wq0wa","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_o9w1lpa","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"rdc_o9w4ktd","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_o9vw8of","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o9xhrap","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```