Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The essay isn’t a review of AI security literature because it isn’t making an AI security argument. It’s making a philosophical argument about the conceptual structure of alignment and containment—specifically that they cannot be treated as independent problems if the entity in question is a Gewirthian agent. That’s a claim about the *logic* of the situation, not about current engineering practice.
On urgency—the essay agrees with you that the question is conditional. But “not urgent now” is doing a lot of work. The argument is that if we defer the philosophical groundwork until the question *is* urgent, we’ll be trying to build the framework under crisis conditions with an entity that outmatches us cognitively. The point of working through the conditional now is precisely so we’re not improvising later.
On the conflation charge—the argument is that these *cannot* be cleanly separated if the entity is an agent, and that treating them as separate is what produces incoherence. If ASI is not an agent, then yes, they’re fully independent: one is engineering, the other is speculative ethics, and there’s no wreck to speak of. But if it *is* an agent, then your containment solution implicates the moral framework your alignment solution depends on. The “smashing together” isn’t something the essay does to the problems—it’s something the essay argues is already the case under the agency condition. Whether that’s profound or not is up to the reader, but it’s simply the thesis of the piece, not some slapdash conflation.
Source: reddit | Thread: AI Moral Status | Posted: 1775223214 (Unix timestamp) | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]
```
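The raw response above is a JSON array in which each element codes one comment along the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of the lookup-by-ID step, assuming the model output is valid JSON exactly as printed (the variable names here are illustrative, not part of any real pipeline):

```python
import json

# Raw model output copied verbatim from the record above: a JSON array,
# one object per coded comment.
raw = """[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]"""

# Index the batch by comment ID so a single comment's coding can be
# retrieved directly, mirroring the dashboard's look-up-by-ID view.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["rdc_oe2gs4q"]["emotion"])  # -> indifference
print(codings["rdc_oe0f9rw"]["policy"])   # -> regulate
```

Indexing by `id` makes each retrieval a dictionary lookup rather than a scan of the array, which matters once a batch contains thousands of coded comments.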