Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- you wanna know why she did it? I can tell you why. Because think about it, you h… (`ytc_UgxP9SYa1…`)
- I could have commented this myself. AI’s sole feature is to convince as many peo… (`ytr_Ugym8tOhY…`)
- This was true a few months ago, now it is a bad take. AI is already making contr… (`rdc_oh3nh2r`)
- A couple of things here. 1. No one discusses the "breakthrough" or what have yo… (`ytc_UgyJCXOb6…`)
- I personally hope that given how disillusioned most millennials and Gen Z are wi… (`ytc_UgxHChU_b…`)
- They'll become overly reliant on it, medical developments and research will stag… (`ytc_Ugzttq2_B…`)
- Thank you for sharing your observation! It's interesting how Sophia's voice and … (`ytr_Ugw7uIDVP…`)
- Learn there's NO expectation no perfection art IS messy art IS hard art can let … (`ytc_UgzYMJZxT…`)
Comment
Anthropic’s “Constitution” for its AI model Claude is full of hyperbole, its primary author Amanda Askell a PhD in analytical philosophy believing this LLM is a novel entity, has a soul, is a person, has various capabilities suggestive of consciousness and conscience, mixing up various ethical constraints from virtue ethics, utilitarianism, and deontology without concern for contradictions. Anthropic has no check on her moral intuitions, her practical rationality, her slate of “human values”, and so on that is absurdly problematic in that document. Having listened to her in various interviews I can’t help but wonder how she was hired for her role since she has limited experience even as a philosopher. I balk at the hyperbole. There’s more strategic ambiguity in the document than there is plausibility.
youtube
2026-02-13T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzoGEzrZ04dH0QSKKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbIcGmgsKA7hhuvfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8mZpln6KfYEXT9CB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwhygt0NbliESaY0Vp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyifskYkxF13r9UCbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyPFg3mI6ySGPUOb254AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGzuwAAaFrosw9X7J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwa-R1JxLYIe496Upt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHu-fRYwhE7h4YTKR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHmwZ5uJqK5vnHGmB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
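The raw LLM response is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID follows; the allowed code sets are inferred from the values visible on this page (the real codebook may contain more categories), and the sample IDs are hypothetical:

```python
import json

# Allowed codes per dimension, inferred from the values visible in this
# dump; the actual codebook may define additional categories (assumption).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    into a dict keyed by comment ID, skipping rows with unknown codes."""
    by_id = {}
    for row in json.loads(raw_response):
        codes = {dim: row.get(dim) for dim in CODEBOOK}
        if all(codes[dim] in CODEBOOK[dim] for dim in CODEBOOK):
            by_id[row["id"]] = codes
    return by_id

# Hypothetical IDs for illustration only.
raw = '''[
  {"id":"ytc_example1","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_example2","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

codings = index_codings(raw)
print(codings["ytc_example2"]["emotion"])  # prints: mixed
```

Validating each row against the codebook before indexing mirrors how a coded row with unparseable or off-schema values would surface as "unclear" or be dropped rather than silently stored.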