Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- So seems like an eye for an eye situations. this is when we start making deep fa… (ytc_UgxS5ubx2…)
- Ai art is to me like modern houses architecture. It's plain, has no soul or huma… (ytc_UgzmY6XEU…)
- It's only a matter of time before one of those trucks gets into a crash and at t… (ytc_UgyuBcEm-…)
- @name-nam Yes, that's a point I can agree with. I don't support the stealing of… (ytr_UgwTyWaRd…)
- "no one is stopping you from making art, so why are you still complaining about … (ytc_UgwzauZMW…)
- I have noticed that many YT Shorts and even some videos are beginning to have th… (ytc_UgzdFDrFz…)
- At this point it doesn't matter. Even "IF" the US, NATO, Europe and the allies d… (ytc_Ugy0-f4jz…)
- Just a thought here on the concern about AI taking jobs away from humans. This m… (ytc_UgwMjUYLc…)
Comment
When a shitty driver harms someone you can revoke their license and take them off the road, or when warranted put them in prison. You can't arrest a corporation (despite the allegation that they're "people") nor can you revoke the driver's license of a software.
Which leads to another question: if you or I make a mistake and get caught, we accrue points on our license; we pay for it not just in fines but with increased insurance premiums for years. What happens to the corporations running self-driving cars? Maybe they catch a fine, which is virtually meaningless (a few hundred bucks sends a message to you or me; a few hundred bucks isn't even worth the time to mail a check for Google), but they aren't suffering the same long-term consequences that you or I are. Is it right that we can be held accountable and punished for making human mistakes, but software that was programmed by humans, operated by a corporation composed of humans, has no equivalent penalty?
Source: reddit · Topic: AI Harm Incident · Timestamp: 1765249637.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
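Each coding record assigns one categorical value per dimension. As a minimal sketch, a record like the one above could be checked against a codebook before being accepted; note the allowed values below are inferred only from the responses visible on this page, not from the project's actual (and presumably larger) coding scheme:

```python
# Assumed codebook: allowed values inferred from the visible responses only.
CODEBOOK = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"liability", "industry_self", "unclear", "none"},
    "emotion": {"outrage", "mixed", "resignation", "indifference", "approval"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "company", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
print(validate(record))  # [] -> every dimension has an allowed value
```

A non-empty return value flags which dimensions need manual review rather than rejecting the whole record outright.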
Raw LLM Response
```json
[
  {"id": "rdc_nt1uvg2", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nsyms9i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_nszt0pf", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "rdc_nt0khwl", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_nszu2t5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
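The raw response is a JSON array of coding records, one per comment ID, which is what makes the per-comment lookup above possible. A minimal sketch of parsing and indexing such an output, assuming the model's reply is valid JSON (the function and variable names here are illustrative, not the tool's actual code):

```python
import json

# A raw LLM response in the format shown above: a JSON array of coding
# records, each carrying the comment ID it refers to.
raw_response = """[
  {"id": "rdc_nt1uvg2", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nszt0pf", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "resignation"}
]"""

def index_codings(payload: str) -> dict:
    """Parse the model output and index the coding records by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codings = index_codings(raw_response)
print(codings["rdc_nt1uvg2"]["emotion"])  # outrage
```

In practice a real pipeline would also have to handle malformed replies (a `json.JSONDecodeError` branch), since nothing guarantees the model returns parseable JSON.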