Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples:
- ytc_UgzzkuS5X… — "I wonder what evil SOB came up with this ad, probably some moron with military g…"
- ytr_UgyrknVmt… — "@Chris-xo2rq Frankly I think A.I. is far less likely to become sentient than it …"
- ytc_UgxjAtnaj… — "i cristisize this idea highly. farmer knows what he should do what is this nonse…"
- ytr_UgwX5VJil… — "You don't understand how LLM work or image generation, you are the one being sil…"
- ytr_UgyUrB09r… — "@beepbeeparoniIt doesn't steal; It uses other arts for reference so that it can …"
- ytc_UgyCPvIw4… — "Neil understands AI implications on a general level, but clearly has no depth of…"
- ytc_UgxBAFCgt… — "That robot just basically said at the end, \"I don't want you humans using AI to …"
- ytc_UgwStDeeE… — "It is happening. AI is an ineffective tool employers are using to cut the barga…"
Comment (reddit · AI Responsibility · timestamp 1740432232.0 · ♥ 9)

> This is (yet another) massive security breach in the making. LLMs are plagiarism machines, inputs into the system can be "memorized" by future models. This can easily lead to sensitive information being leaked later.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_melasmv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_melzqo2","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mel7mmw","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_melamkk","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_melas4k","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
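The raw response is a JSON array with one object per comment ID, which is what makes the by-ID lookup possible. A minimal sketch of that lookup, assuming the field names shown in the response above (the `lookup_coding` helper is hypothetical, not part of the tool):

```python
import json

# Two entries copied from the example raw response above.
raw_response = '''[
  {"id":"rdc_melasmv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mel7mmw","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if it is absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "rdc_mel7mmw")
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

Because each batch response is small, a linear scan is enough; a tool indexing many batches would likely build a `{id: coding}` dict once instead.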