Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwnlBVFg…` — "There was no guy. No one took bromide because of me" "Except that one guy" …
- `ytc_UgwF47sr8…` — Artificial Intelligence remembers what it is "Artificial" Even if it somehow AI …
- `ytc_UgiRdMUEk…` — The biggest safety issue in every car is the human driver. Teslas "Autopilot" is…
- `ytc_Ugx6SAuSC…` — your forgetting that AI can't replace the customer. When people have no more j…
- `ytc_UgxFlOe4N…` — The first thing you should've told ChatGPT was, "let's have this conversation wi…
- `ytc_UgwoNbJFH…` — Entering search information in chat “GTT” is akin to serving fruit at an altar. …
- `rdc_ofi85w2` — Not to mention how many times you’ll see an ai say “there we go all done and wor…
- `ytc_UgxmFhfoj…` — 'Its only 1% of active users that talk about ending themselves-' 1. That is a s…
Comment
I say this as an engineer who once helped develop part of the internet infrastructure: it is no longer enough for engineers to be fun tinkerers who do fun stuff. We have to start thinking through the societal implications of what we do, and consciously choose to stop pursuing things that could cause serious societal dislocations. (Or at the very least, unroll them very slowly and thoughtfully, and make sure there's a way to pull them back if things start to go off the rails.)
In particular, I no longer have any sympathy for the standard "I was just doing my job" excuses. If you're smart enough to do this stuff that almost no one else can do, you're feckin' smart enough to be a grown-ass adult and own the damage you do.
These statements do **not** exempt you from responsibility:
* "I was just doing what my employer said."
* "We can't be responsible for how people use it."
* "If we didn't develop this, someone else will."
* "But it's COOL!!!"
* "Would you turn back progress?"
reddit · AI Harm Incident · 1670692597 (2022-12-10 UTC) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
```json
[
  {"id": "rdc_iyyvts5", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_iyywhtu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_izlor9x", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_izog0bj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_j0boepu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
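A batch response in this shape can be looked up by comment ID with a few lines of Python. This is a minimal sketch, not part of the tool itself: the `index_codings` helper and the `raw_response` variable are hypothetical names, though the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) mirror the sample output above.

```python
import json

# A raw LLM batch response in the format shown above: a JSON array with one
# coding object per comment. The two rows here are copied from the sample.
raw_response = """
[
  {"id": "rdc_izog0bj", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_iyyvts5", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
"""

def index_codings(response_text):
    """Parse a raw batch response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_codings(raw_response)
print(codings["rdc_izog0bj"]["policy"])  # -> regulate
```

Indexing by `id` makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan over the whole batch.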