Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "He tried moving away from AI. It took a long time to make episodes, so he now us…" (ytc_UgyyPTtXJ…)
- "History repeats itself. The new technology will create new opportunities" (rdc_jrosqgb)
- "I used Palantir in Afghanistan in 2012 during OEF. Biometrics too. I knew it wou…" (ytc_Ugz0RINAw…)
- "Exactly. Calling it \"AI art\" is giving it far too much credit. Art is, by defini…" (ytr_UgyC6J9tc…)
- "> buy / I don't think that's going to matter a whole hell of a lot if you have…" (rdc_emo2koi)
- ">Nothing available so far is going to let these things drive in a snowstorm. …" (rdc_czxumr2)
- "bro fuck ai what are we talking about it's not even real ai it's machine learnin…" (ytc_Ugww01We4…)
- "*If a teacher is really involved in teaching it automatically Makes Students eng…" (ytc_UgwCwe2o7…)
Comment
The fundamental problem I have with that is that we are trying to make an algorithm act “moral” and morality is such a subjective thing. That’s a tall order for an algorithm. Besides, who gets to decide what is okay and what is not okay?
Source: reddit · Topic: AI Harm Incident · Posted: 1628624424.0 (Unix epoch, ≈ 2021-08-10) · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
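Each coded comment carries one value per dimension. A minimal sketch of checking a coded row against the categorical values observed in this dump (the value sets below are illustrative, taken from the rows shown here, not the codebook's full vocabulary):

```python
# Assumed schema: one dict per coded comment, one key per dimension.
# ALLOWED holds only values observed in this dump, not a complete codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "resignation", "indifference", "mixed"},
}

def validate(row: dict) -> list:
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

coded = {"responsibility": "unclear", "reasoning": "deontological",
         "policy": "unclear", "emotion": "resignation"}
print(validate(coded))  # [] (every value is in the observed sets)
```

A check like this catches model outputs that drift outside the expected categories before they reach the table above.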
Raw LLM Response
```json
[
{"id":"rdc_h8g4uu5","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"rdc_h8g9znv","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_h8f8jgl","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"rdc_h8fs867","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_h8g4uyh","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
```
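The raw response above is a JSON array with one object per coded comment in the batch. A minimal sketch (assuming this schema) of parsing such a response and looking up the row for a given comment ID:

```python
import json

# Two rows copied from the raw response above; a real response holds the
# whole batch.
raw_response = """
[
 {"id":"rdc_h8fs867","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_h8g4uyh","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
"""

def lookup(raw, comment_id):
    """Return the coding row for comment_id, or None if it is absent."""
    rows = json.loads(raw)
    return next((r for r in rows if r["id"] == comment_id), None)

row = lookup(raw_response, "rdc_h8g4uyh")
print(row["emotion"])  # resignation
```

Note that the last row, `rdc_h8g4uyh`, matches the coding-result table shown for the comment above.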