Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by comment ID, or click a random sample below to inspect it.
- `rdc_cggc7h9`: **Press Conference Summary, 31 March 2014** * Announced that Malaysia's prime m…
- `ytr_UgzzQ1qjU…`: Just want to clarify, I’m not flaming you for being an ai artist. The issue here…
- `ytc_UgymYjeAL…`: The enviromental cost and the fact that gen AI is planned to be and already bein…
- `ytc_UgxcIuHeF…`: Its ridiculous to think that AI will be applied everywhere instantly. Every new…
- `ytc_UgykxVQTr…`: There is nothing unethical about AI art. It doesn't copy individual people. It i…
- `ytc_UgxIAjqXc…`: Most of Hollywood actors and actresses are arrogant; lacking class and moral. M…
- `ytc_UgzCXGueo…`: I hate AI. I got my first job as a junior dev one year ago. Back then, we were …
- `ytc_UgygpymOZ…`: I know what's going on I know everything about everything yall just to dense to …
Comment
Within our life times it will be impossible to tell if CSAM is created by an AI or not.
Currently we prosecute people for possession of CSAM even when we can't locate the child in the image, that simply won't be possible in a world where it is legal to possess AI generated CSAM.
Even if the victim is real, you couldn't prove beyond a reasonable doubt that the person in possession of the images knew that, they could have thought they were looking at perfectly legal AI creations.
There's a huge legal can of worms opened by allowing photorealistic depictions of CSAM.
Source: reddit | Topic: AI Harm Incident | Timestamp: 1695589552 (2023-09-24 UTC) | ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_k20eafg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_k20hk66","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_k1zwre5","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"rdc_k21qrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_k1zxpcu","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
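The raw response is a JSON array with one object per coded comment, so the "look up by comment ID" view above reduces to parsing the array and indexing it by `id`. A minimal sketch (the field names follow the response shown; the helper name and the truncated two-entry sample are illustrative, not the actual batch):

```python
import json

# Illustrative raw LLM response: a JSON array with one coding per comment,
# using the same fields as the response shown above.
raw_response = '''
[
  {"id": "rdc_k21qrce", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_k1zwre5", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
'''

def index_by_comment_id(response_text):
    """Parse a batch response and index the codings by comment id (hypothetical helper)."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

lookup = index_by_comment_id(raw_response)
print(lookup["rdc_k21qrce"]["policy"])  # -> ban
```

Indexing once and reusing the dict keeps per-comment inspection O(1), which matters when a single batch response covers many comments.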