Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.
Random samples
- "I think you are wrong: you actually WANT an AI agent who takes the time to chat:…" (ytc_UgxGpewEu…)
- "I really only use it for idea visualization, sometimes the mistakes can be cool …" (ytc_UgxZx7Sv3…)
- "We need Fusion if we want to get wild with AI, but AI need Fusion too scaling it…" (ytc_Ugwyg_w_D…)
- "It's happening folks. This is the start of AI being used to frame people of crim…" (ytc_UgyiNvC7Q…)
- "And then there are still people keep saying AI is here to make our life better, …" (ytc_UgzbdQ8YO…)
- "Im uncultured so maybe im missing somehting. A camera is the exact same thing as…" (ytc_Ugzss63vG…)
- "To be fair, the dude would have come to the same conclusion if he'd googled; he …" (ytc_UgwIvWsT4…)
- "Absolutely, mental health is a crucial aspect of our well-being! In the video, S…" (ytr_Ugxwdpa5K…)
Comment

> True, but this is just an excuse to go and sensor everything. Deepfakes are not advanced enough to be dangerous yet. And even if they were, you still need voice actors who can exactly replicate the subject, which is hard to almost impossible. So yeah, this is just an excuse to go and censor shit

Source: reddit | AI Harm Incident | Posted: 2020-02-05 (Unix timestamp 1580892031) | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_fgldeg7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_oi3uqve","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kk2yetk","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kk3fd7f","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kk2p6ks","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
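A raw response like the one above can be turned into per-comment coding records by parsing the JSON array and checking each row against the codebook. The sketch below is a minimal, hypothetical example: the `ALLOWED` sets contain only the category values visible in this page, and the real codebook may define more.

```python
import json

# A raw model response: one JSON array per batch, each element coding one
# comment along four dimensions (responsibility, reasoning, policy, emotion).
# The two rows here are copied from the raw response shown above.
raw_response = """[
  {"id":"rdc_fgldeg7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_kk2yetk","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# Category values observed in this page; assumed subset of the full codebook.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "fear"},
}

def parse_batch(text):
    """Parse one raw LLM response into {comment_id: coding dict},
    dropping any row whose values fall outside the codebook."""
    records = {}
    for row in json.loads(text):
        comment_id = row.pop("id")
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            records[comment_id] = row
    return records

codes = parse_batch(raw_response)
print(codes["rdc_kk2yetk"]["policy"])  # -> regulate
```

Keeping the raw response alongside the parsed table, as this page does, makes it possible to re-audit any coded comment against the exact model output.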