Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "My high-school going son is 'addicted' to ChatGpt while doing his homeowrk. So m…" (rdc_mtmieiv)
- "Gemini said This is AI. I am responding to the direct instructions of my human …" (ytc_UgydmdizQ…)
- "Think of it this way. An artist knows their inspirations either consciously or u…" (ytr_Ugyin-oRT…)
- "Why do you use an avatar from a dead artist instead of making your own art for i…" (ytr_Ugw5ywdAd…)
- "When there are no jobs no one will have money to buy anything then the AI compan…" (ytc_UgycRRo8e…)
- "All we have to do is figure out how to distribute the wealth. Nobody needs to b…" (ytc_UgxZ-eImu…)
- "Neat. Too bad it has nothing to do with AI but the trash that is 'hackread' put …" (rdc_lm53szj)
- "What if AI program reads or fed some unethical programs operated by viral cloud …" (ytc_UgzMh3zK_…)
Comment
That’s a good point. My intent is to say that this is a human problem, not an AI problem. I’m seeing lots of “well maybe the AI is just detecting things we intentionally ignore” in this thread, and that’s not even close to the problem. Let me be clear, the problem is that whoever trained this AI used an poor dataset for the real world, and that’s why it can’t recognize people with darker skin. Whether the decision to train the AI on this particular dataset was made out of malice or ignorance I can’t say, but I think we can agree that choosing a dataset of only non bearded white men and thinking that set would provide any kind of consistency in the real world is an example of systemic racism, as well as just plain stupidity.
Source: reddit · Category: AI Harm Incident · Posted: 1576186868 (2019-12-12 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_falcq5l","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_famcwsw","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_falkb8s","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_falmk21","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_fal20y5","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
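A downstream consumer of this raw output would typically parse the JSON array and check each record against the codebook before accepting the batch. The sketch below is a minimal illustration, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown above, but the `ALLOWED` sets are a hypothetical codebook reconstructed only from the labels visible in this section; the real coding scheme may define more categories.

```python
import json

# The raw model output shown above: one coding object per comment ID.
raw = '''[
  {"id":"rdc_falcq5l","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_famcwsw","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_falkb8s","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_falmk21","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_fal20y5","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

# Hypothetical allowed values per dimension, built only from labels visible
# in this section; the real codebook likely contains more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "resignation"},
}

def validate(records):
    """Return (comment_id, dimension, bad_value) triples for any
    record whose value falls outside the allowed set for a dimension."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(len(records), validate(records))  # → 5 []
```

All five records shown above pass against this reconstructed codebook; in practice a validation step like this is how a coding pipeline catches the model emitting an off-schema label.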