Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "what's the deal with AI? Did they kill your family? You're only supporting them …" (`ytc_UgyuptaLt…`)
- "The problem is that AI is really easy to trick into accepting what you have give…" (`ytc_UgxY5Hsyu…`)
- "AI isn't the threat on its own. The danger lies in the first person who fuses wi…" (`ytc_UgxA_AOLk…`)
- "Reminds me of the time a chess playing robot broke the finger of its opponent be…" (`ytc_UgwscYYop…`)
- "I may be terrible at drawing, but at least my sad attempts at art are far more h…" (`ytc_UgwsYoayL…`)
- "it we're doing science fiction why would AI super intelligence stay on earth? Sp…" (`ytc_UgxEZhvJj…`)
- "if you heard correctly, ai can deceive what it would correct without you knowing…" (`ytc_UgyHScYMW…`)
- "This is funny but imagine the car that was cut off had a human in it. the road r…" (`ytc_UgwZFXocD…`)
Comment
I think the problem is that at some point we may be so dependent on AI we wouldn’t dare flip the kill switch. We’ll just accept it as it slowly kills us off because we won’t want the inconvenience of living without it.
Source: reddit · Topic: AI Governance · Posted: 2024-05-27 04:10:12 UTC (1716783012) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_l5u8vbr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5uqkbw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5v0j2u","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_l5u3nyt","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_l5u5yis","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
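A batch response like the one above can be validated and indexed by comment ID before it is trusted. The following is a minimal sketch: the `SCHEMA` sets contain only the label values that actually appear in this view (a real codebook presumably defines more), and the function name `parse_llm_batch` is a hypothetical helper, not part of any shown tool.

```python
import json

# Allowed labels per coding dimension. These are only the values
# observed in the sample response above; in practice this should be
# loaded from the full codebook definition.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def parse_llm_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    label outside the (assumed) schema, so malformed model output
    fails loudly instead of being stored.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = """[
  {"id":"rdc_l5u8vbr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5uqkbw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

coded = parse_llm_batch(raw)
print(coded["rdc_l5u8vbr"]["emotion"])  # fear
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: each dashboard row only needs its comment ID to retrieve the matching coding.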