Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Also, use Version History in Google docs to show all your revisions. If OP actu…" (`rdc_kgqbtbh`)
- "Theseus ship is the perfect concept. We will never be able to say if a ai self c…" (`ytc_Ugy58_oLf…`)
- "Why can't we do both? Why are humans so single-tracked in their heads that they …" (`rdc_cdlwbl8`)
- "Well honestly, if your job can be replaced by automation, your job was a useless…" (`ytc_Ugwf_XjEw…`)
- "Aurora is selling ai that will directly replace human workers. He's, like, buyin…" (`ytc_UgxwFmQPV…`)
- "@monikasalt your same logic applies to ai art. you still need to know what makes…" (`ytr_UgxWbwGDH…`)
- "Yeah - honestly this is what automation was **supposed** to do. Let the value be…" (`rdc_hkfulp3`)
- "Text generators make stuff up all the time, not just sometimes. "Good" outputs h…" (`ytc_Ugz41ClDW…`)
Comment

> In theory the ai might not have been active or aware that he had given his ID/ordered alcohol when it could become a problem.

Source: reddit · Topic: AI Moral Status · Posted: 1745254030.0 (Unix time, ≈ 2025-04-21 UTC) · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_oi2vt60", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi16u28", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oi1mkiw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_moa4j3i", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_moa56o0", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
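A batch response like the one above can be indexed by comment ID to recover the coded dimensions shown in the result table. The sketch below is a minimal illustration, not the tool's actual code: it assumes the response is valid JSON and that every row carries the four dimension fields (`responsibility`, `reasoning`, `policy`, `emotion`) plus an `id`; only two rows from the response are reproduced here.

```python
import json

# Two rows copied from the raw batch response above; field names match
# the dimensions in the Coding Result table.
raw = """[
  {"id": "rdc_moa4j3i", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_moa56o0", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment ID, or None if uncoded."""
    return codings.get(comment_id)

print(lookup("rdc_moa4j3i")["reasoning"])  # deontological
```

A real pipeline would also want to validate that each value falls in the codebook's allowed set before accepting the batch; the viewer's "unclear" values suggest the codebook permits an explicit uncertainty label.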