Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment text truncated):

- ytc_UgwYni3kY… : "Also Ci/Cd, testing and deployment can't be done by AI itself. Make the app read…"
- ytc_UgzBVgp3x… : "“AI art is boring, let me explain” no one needs an explanation, we all know it s…"
- rdc_fn5nnos : "Canadians on the internet generally don’t seem to like him. I think people just …"
- ytc_UgxHneIel… : "WHATS SCARY IS WE ARE MORE WORRIED ABOUT AI THAN THE NIGHTMARE HAPPENING IN AMER…"
- rdc_hsmvjav : "To some degree, sometimes holding a person responsible for being unconscious is …"
- ytc_UgzhBL6Xz… : "Really enjoyed the conversation but had one question that would loved Karen to a…"
- ytc_UgyTMsPjV… : "yeh, thanks Steven. very interesting... i suppose they become smarter even faste…"
- ytc_UgwXaoUEf… : "Most People Say AI Will Take Over The World (Atleast The Memers) But This One Kn…"
Comment

> Here's a question though, what happens when a self-driving car gets in an accident? Who is at fault? Wouldn't this open Google up to billions in lawsuits/liability?

Source: reddit
Category: AI Harm Incident
Posted: 1455335978.0 (Unix epoch seconds)
♥ 1
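The timestamp 1455335978.0 above is in Unix epoch seconds; converting it to a readable UTC date is a one-liner. A minimal sketch in Python:

```python
from datetime import datetime, timezone

# Convert the comment's epoch-seconds timestamp to a UTC datetime.
posted = datetime.fromtimestamp(1455335978.0, tz=timezone.utc)
print(posted.isoformat())  # → 2016-02-13T03:59:38+00:00
```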
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_czy5d00", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_czy82ur", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_oi3d0s8", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_oi25lks", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_efildd8", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
```